Large-scale analysis of spiking data
1. Monte Carlo-based method for
large-scale network analysis
with a Gibbs distribution
Hassan Nasser & Bruno Cessac
Neuromathcomp team – INRIA Sophia-Antipolis
3. Observable
● Observable: a function that associates a real
number to a spike train.
● Examples: firing rate, pairwise correlation.
Firing rate
● Neuron k1 fires at time n1 while neuron k2
fires at time n2.
● Neuron k1 fires at time n1 while neuron k2 is
silent at time n2.
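As an illustration (a minimal sketch, assuming the spike train is stored as a binary NumPy array with one row per neuron and one column per time bin; the names and the random raster are ours, not from the slides), these two observables can be estimated as:

```python
import numpy as np

# Hypothetical binary raster: rows = neurons, columns = time bins.
rng = np.random.default_rng(0)
raster = (rng.random((3, 1000)) < 0.2).astype(int)

# Firing rate of each neuron: the fraction of time bins with a spike.
firing_rate = raster.mean(axis=1)

# Pairwise correlation of neurons k1 and k2: the average of the
# product of their simultaneous spike events.
k1, k2 = 0, 1
pairwise = (raster[k1] * raster[k2]).mean()

print(firing_rate)  # one rate per neuron
print(pairwise)
```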
5. Gibbs potential
● Gibbs potential: a model of a spike train in
which observables and their coefficients (the
parameters, or weights) appear. Schematically,
the potential is a weighted sum of observables,
H = Σi λi Oi, where the λi are the parameters
(weights) and the Oi the observables.
6. Modeling a spike train with a
Gibbs potential
● Given a spike train (and hence its empirical
averages, i.e. an empirical probability distribution).
● Given a Gibbs potential and the associated
Gibbs probability distribution.
● Our aim is to find the parameters such that the
KL distance between the empirical distribution and
the theoretical Gibbs distribution is minimal -->
Maximal entropy.
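In standard maximum-entropy notation (the symbols are ours, not taken from the slides), this reads:

```latex
% \pi: empirical distribution of the spike train,
% \mu_\lambda: Gibbs distribution with potential
%   H_\lambda = \sum_i \lambda_i O_i.
d_{KL}(\pi \,\|\, \mu_\lambda)
  = \sum_\omega \pi(\omega)\,\log\frac{\pi(\omega)}{\mu_\lambda(\omega)}
% Minimizing d_{KL} over \lambda is equivalent to picking, among all
% distributions that match the empirical averages
%   \mu[O_i] = \pi[O_i] \quad \forall i,
% the one with maximal entropy.
```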
7. Relevant previous work
● Vasquez et al. 2012 showed that Gibbs
potential models with memory reproduce the
statistical distribution of a spike train more
precisely --> small-size networks.
● Tkacik et al. 2008 proposed a Monte Carlo-based
method to reproduce the statistics of an Ising
model --> large-scale networks.
8. Maximum entropy vs.
Monte Carlo
● Maximum entropy:
● Precise (solved with the transfer matrix).
● Computation time grows exponentially with the
network and memory size (computation of the
transfer matrix).
● Monte Carlo:
● Not exact (the error depends on the chain length).
● Fast (linear growth with network and memory
size).
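To make the comparison concrete (standard transfer-matrix reasoning for such models; the notation is ours, not from the slides): for N neurons and memory depth R, the transfer matrix enumerates all spatio-temporal spike blocks, while one Monte Carlo sweep only touches each event once.

```latex
% Transfer matrix: one row/column per block of N neurons over R time
% steps, i.e. a 2^{NR} \times 2^{NR} matrix.
\text{cost}_{\text{transfer}} \sim O\!\left(2^{NR}\right)
% Monte Carlo: each flip updates a single event, so the cost of a
% sweep grows linearly with the raster size (N neurons, T time bins).
\text{cost}_{\text{MC}} \sim O\!\left(N \cdot T\right) \text{ per sweep}
```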
9. My work
● Reproducing the statistics of large networks
using Monte Carlo.
● I began by implementing the classical
Monte Carlo algorithm in order to reproduce the
statistics, but the procedure did not converge.
10. ● The situation is different in the spatio-
temporal case, since:
● The normalization factor changes in the spatio-
temporal case. However, the classical
Monte Carlo algorithm works for Ising and memory-1
models (taking into account a detailed-balance
assumption).
14. How does it work?
● Given a real spike train.
● We generate a random spike train of length
Ntimes and a random set of parameters.
● We choose a random event in the raster, flip it
(0 --> 1 or 1 --> 0), and compute the
difference in energy.
● If the energy increases, we accept the new state;
otherwise, we accept it with probability exp(ΔH)
(the Metropolis rule).
● We repeat this flipping Nflip times.
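The flip-and-accept loop above can be sketched as follows (a minimal sketch, assuming a memory-less Ising-like potential H = Σk hk ωk + Σk<l Jkl ωk ωl per time bin, with random weights; all names and values are ours, not from the slides). Since P ∝ exp(H), the Metropolis rule accepts any flip that increases H and accepts a decrease with probability exp(ΔH):

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 5, 200          # neurons, time bins
n_flips = 20000        # number of single-event flips

# Hypothetical Ising-like parameters (illustrative only).
h = rng.normal(0, 0.5, N)            # single-neuron weights
J = rng.normal(0, 0.1, (N, N))       # pairwise weights
J = (J + J.T) / 2                    # symmetric couplings
np.fill_diagonal(J, 0.0)             # no self-coupling

raster = rng.integers(0, 2, (N, T))  # random initial spike train

def delta_energy(raster, k, t):
    """Change in the potential H if event (k, t) is flipped."""
    old = raster[k, t]
    new = 1 - old
    # Only terms involving neuron k in time bin t change.
    return (new - old) * (h[k] + J[k] @ raster[:, t])

for _ in range(n_flips):
    k = rng.integers(N)
    t = rng.integers(T)
    dH = delta_energy(raster, k, t)
    # Metropolis rule: always accept an increase in H (higher
    # probability under P ∝ exp(H)); otherwise accept with
    # probability exp(dH).
    if dH >= 0 or rng.random() < np.exp(dH):
        raster[k, t] = 1 - raster[k, t]

print(raster.mean(axis=1))  # estimated firing rates under the model
```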
15. Reproducing the statistics
● With a simple raster drawn from a known
distribution, where the empirical probabilities are
known, we could generate another raster with the
same probability distribution.
[Plot: theoretical probability vs. empirical probability]
16. How fast is Monte Carlo?
[Plot: computation time, exponential vs. linear growth]
19. ● What we presented was the application of
Monte Carlo to small-size networks.
● Why?
● Because with small-size networks, we can
compute the error committed in the observable
computation.
● Why?
● Because the maximum entropy method can compute
observable averages only for small-size
networks.
20. Bruno invented a “particular
potential”
● A potential whose observable averages can
be computed analytically, even for large
networks.
● This potential is a set of pairs of events
(neuron, time).
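The slides do not give the exact form of this potential; as an illustration, assuming each term is a product of two spike events (neuron, time) at fixed relative time lags, its empirical average over a raster can be computed as (all names here are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
raster = (rng.random((4, 500)) < 0.3).astype(int)

def pair_average(raster, k1, t1, k2, t2):
    """Empirical average of the monomial w_{k1}(n + t1) * w_{k2}(n + t2),
    taken over all window positions n of the raster."""
    d = max(t1, t2)
    T = raster.shape[1] - d  # number of valid window positions
    return np.mean(raster[k1, t1:t1 + T] * raster[k2, t2:t2 + T])

# Example: neuron 0 at lag 0 paired with neuron 2 at lag 1.
avg = pair_average(raster, 0, 0, 2, 1)
print(avg)
```

Note that with k1 == k2 and both lags zero, this reduces to the neuron's firing rate, which gives a quick sanity check.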
This is the estimated average with Monte Carlo.
We can compute the error by comparing this estimated average
with the empirical average given by the real raster.
22. What we did - what we haven't
done yet
● We implemented a Monte Carlo-based algorithm that
estimates the statistics of a spike train under
a Gibbs distribution, for large networks and for
models with memory.
● Now we can deal with large networks.
● Next, we want to compute the parameters.
● Application of our method to real data.