University of Nice Sophia-Antipolis
             INRIA - I3S

                Midterm Report
Parameter Tuning in virtual retina
    using synaptic plasticity
                      Author:
                   Hassan Nasser
  Supervisors:
  Dr. Bruno Cessac
  Dr. Thierry Vieville
  Dr. Pierre Kornprobst

  Tutor:
  Dr. Marc Antonini




                   August 13, 2010
Contents

1 The Virtual Retina                                                         4
  1.1 Introduction                                                           4
  1.2 The Vertebrate Retina                                                  4
  1.3 The underlying retina model                                            9
  1.4 The VirtualRetina software                                            11
  1.5 Simulation via VirtualRetina                                          13
  1.6 Conclusion                                                            15

2 Performing statistics with real and VirtualRetina data                    17
  2.1 Introduction                                                          17
  2.2 What is a spike train?                                                17
  2.3 Performing statistics                                                 19
      2.3.1 The statistical models                                          20
  2.4 The EnaS Library                                                      21
  2.5 Performing statistics with the EnaS library and results               22
      2.5.1 Synthetic data                                                  23
      2.5.2 Real data                                                       25
      2.5.3 VirtualRetina data                                              27
  2.6 Conclusion                                                            30

3 Inferring connectivity between Retinal Ganglion Cells                     31
  3.1 Introduction                                                          31
  3.2 The connectivity between ganglion cells from real data acquisition    31




List of Figures

 1.1  The vertebrate eye                                                      5
 1.2  The visual pathways                                                     5
 1.3  Top: the action potential. Bottom: the corresponding spike train        6
 1.4  The different cell types in the vertebrate retina                       7
 1.5  The different layers in the retina                                      8
 1.6  The different layers in the retina                                      8
 1.7  The different stages of the retina model                               10
 1.8  The VirtualRetina software logo                                        11
 1.9  An example of a gray-scale input image                                 11
 1.10 The structure of the .spk file                                         12
 1.11 The log-polar distribution of the ganglion cells                       12
 1.12 The beginning of the retina.xml file                                   13
 1.13 The signal at each layer of the retina model                           14
 1.14 Spike trains for different cell types                                  15
 1.15 The spike trains in a large-scale simulation                           16

 2.1  A typical spike train (produced with VirtualRetina)                    18
 2.2  The technical details of a raster plot                                 18
 2.3  The EnaS logo                                                          21
 2.4  The correlation between non-correlated data                            23
 2.5  Evolution of the correlation for different probability values
      (Bernoulli distribution)                                               24
 2.6  Evolution of the correlation for different coincidence windows, i.e.
      delays τ between the firing of the two neurons, going from 1 to 6      24
 2.7  The MEA chip for ganglion recordings                                   25
 2.8  The evoked-potential recording                                         25
 2.9  Correlation between ganglion cells (at rest) in real acquisition       26
 2.10 Correlation between ganglion cells (evoked potential) in real
      acquisition                                                            27
 2.11 Correlation between X-ON ganglion cells in VirtualRetina               28
 2.12 Correlation between Y-ON ganglion cells in VirtualRetina               28
 2.13 Correlation between X-OFF ganglion cells in VirtualRetina              29
 2.14 Correlation between Y-OFF ganglion cells in VirtualRetina              29

 3.1  The MEA chip for ganglion recordings                                   32
 3.2  The λ_0j as a function of the distance between retinal ganglion cells  33




Motivation
The Virtual Retina, recently developed in the Odyssée team at INRIA Sophia-Antipolis,
allows large-scale simulation of the retina with a reasonable computational cost. This
software simulates the vertebrate retina by implementing the successive steps of the
light-to-spikes transformation: the light is the input, a spike train is the output, and
the output depends on the retina parameters. The current implementation does not take
into account the correlation between the ganglion cells at the output level.
Starting from an understanding of the VirtualRetina software and of the statistical
framework used to study ganglion cells, the aim of this work is to introduce an important
characteristic of the retina into the simulator and thereby improve its biological
plausibility. The motivation is twofold:

  1. To provide visual neuroscientists with a tool that generates large-scale input for
     cortical simulations.

  2. Bio-inspired image compression. Through our collaboration with I3S, who work
     on image compression and reconstruction from spiking data, a parallel line of
     work aims to assess the effect of introducing connections between ganglion cells
     on the compression efficiency.

Previous work has shown that there are connections between retinal cells at different
levels. VirtualRetina takes the connections between cells into account at several levels,
but not at the level of the ganglion cells. Theoretically, the relations between cells at
the earlier levels do not propagate through the layers of VirtualRetina, so there should
be no correlation between the ganglion cells. This hypothesis is verified by the statistics
presented in the second chapter. The work demands multidisciplinary knowledge and,
thanks to the team and its collaborations, people from different domains are working
to accomplish the task. The work is divided into two parts. The first part, described
in this preliminary report, consists in studying the previous work on VirtualRetina and
then performing statistics on the VirtualRetina output in order to verify that there is
no correlation between ganglion cells in VirtualRetina.




Chapter 1

The Virtual Retina

1.1      Introduction
The goal of this chapter is to introduce the VirtualRetina software. The chapter is
divided into three parts:

  1. In the first part we briefly present the vertebrate retina.

  2. The second part explains the model which is behind the VirtualRetina Software.

  3. The third part explains the software itself.

   The main reference of this chapter is the thesis of Adrien Wohrer [1].


1.2      The Vertebrate Retina
The vertebrate retina (situated at the back of the eye, Figure 1.1) is the interface
between the incoming light and the neural pathways. Its main role is the transduction
of light rays into electrical currents; these currents are carried by the action potentials
that travel to the cortex. The signals from the right eye go to the left cortex and vice
versa (Figure 1.2). Light transduction is carried out by a sequence of complex phenomena
happening at the different layers of the retina.
    The complexity of the retinal structure makes it a vast field of study. To convert
light into action potentials, several processes take place across different stages, from the
photoreceptors to the ganglion cells. Note that we use the term spike instead of action
potential: a spike denotes that the neuron fires, i.e., that its membrane potential reaches
the threshold and an action potential is emitted at its output (Figure 1.3).




Figure 1.1: Left: the eye from the cornea to the optic nerve. Right: a magnification
showing the retina in some detail.




         Figure 1.2: The visual pathways, from the optic nerves to the cortex


Figure 1.3: Top: the action potential. Bottom: the corresponding spike train


   Returning to the retinal structure, the following cell types define the retina:

   • Light receptors.

   • Horizontal cells.

   • Bipolar cells.

   • Amacrine Cells.

   • Ganglion cells

   These different types of cells are represented in Figure 1.4. Each of these cell
types has a role in processing the image of the external world. At the latest count, the
retinal network is composed of at least 50 clearly distinct cell types.




Figure 1.4: The different cell types in the vertebrate retina

    Figure 1.5 shows the trajectory of the light-to-spikes process through the different
layers. The light coming from outside is optically processed by the eye compartments
(pupil, lens, ...), then passes through the neural layers and hits the very back of the
retina, where the photoreceptors lie. A chemical process in the photoreceptors transforms
light into electrical current. There are two types of receptors: cones and rods. The rods
are very sensitive to light and motion. The cones are activated at high illumination
levels and are sensitive to colors and shapes.




Figure 1.5: The different layers in the retina




                     Figure 1.6: The different layers in the retina

   The electrical current produced by these receptors then goes to the OPL (Outer
Plexiform Layer), which contains two kinds of cells: the horizontal and bipolar cells.
As Figure 1.6 shows, there are direct connections between consecutive components as
well as between non-consecutive components. The light receptors excite the horizontal
and bipolar cells. The bipolar and ganglion cells excite the amacrine cells. There are
also feedback actions: the horizontal cells give inhibitory feedback to the bipolar cells,
and the amacrine cells give inhibitory feedback to the ganglion and bipolar cells.


1.3      The underlying retina model
This part explains the model underlying VirtualRetina. The creator of the VirtualRetina
software chose a contrast gain control model to implement the retinal functionality,
because he wanted to allow large-scale simulation while keeping in mind the biological
plausibility of the underlying model. The term 'contrast gain control' refers to the
ability of the retinal system to control the transfer function of the contrast information.
Figure 1.7 represents the retina model: from incoming light to spike generation, each
stage is represented by a processed image and the corresponding mathematical equation.
    The incoming light is represented by L(x, y, t), the illumination at each pixel of
the image at time t. This light, convolved in time and space at the receptor layer,
gives a positive contribution to the bipolar cells. Conversely, the horizontal cells apply
another spatio-temporal convolution to this signal and act through negative feedback.
From both actions comes the notion of difference of Gaussians (DOG), the output of
the OPL for a light input. The DOG is then the input of the IPL, the stage where the
contrast gain control process takes place. Our main interest is the ganglion layer, which
receives an input current I_gang(x, y, t). This current drives the ganglion cells, which
fire spikes when their threshold is reached. The ganglion cell axons together form the
optic nerve that goes to the cortex. It is this output from the ganglion cells that
neuroscientists need in order to carry out simulations and research on the cortex.
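
To make the OPL stage more concrete, here is a minimal one-dimensional sketch of the
difference-of-Gaussians idea described above: a narrow centre Gaussian (the receptor
pathway) minus a broader surround Gaussian (the horizontal-cell feedback) applied to a
luminance profile. The kernel widths are arbitrary choices made for illustration, and the
temporal convolutions and the actual VirtualRetina parameters and implementation are
not reproduced here.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// 1D difference-of-Gaussians: centre (sigmaC) minus surround (sigmaS), both normalised.
std::vector<double> dogFilter(const std::vector<double>& L, double sigmaC, double sigmaS) {
    const double PI = 3.14159265358979;
    const int R = static_cast<int>(3 * sigmaS);                  // kernel radius
    const int N = static_cast<int>(L.size());
    std::vector<double> out(N, 0.0);
    for (int x = 0; x < N; ++x)
        for (int u = -R; u <= R; ++u) {
            const int xu = std::min(std::max(x + u, 0), N - 1);  // clamp at the borders
            const double gC = std::exp(-u * u / (2.0 * sigmaC * sigmaC)) / (std::sqrt(2.0 * PI) * sigmaC);
            const double gS = std::exp(-u * u / (2.0 * sigmaS * sigmaS)) / (std::sqrt(2.0 * PI) * sigmaS);
            out[x] += (gC - gS) * L[xu];                         // centre minus surround
        }
    return out;
}

int main() {
    std::vector<double> L(100, 0.0);
    for (int x = 50; x < 100; ++x) L[x] = 1.0;                   // a luminance step (an edge)
    const std::vector<double> b = dogFilter(L, 1.0, 4.0);
    for (int x = 44; x <= 56; ++x)
        std::cout << x << "  " << b[x] << "\n";                  // band-pass response around the edge
    return 0;
}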




Figure 1.7: The different stages of the retina model




1.4     The VirtualRetina software
This software (its logo is shown in Figure 1.8) has been developed by Adrien Wohrer, a
former PhD student at INRIA, and exists in two releases:

   • February 2009 - May 2010




                      Figure 1.8: The VirtualRetina software logo

   The main aim of this software is to produce spike trains given an image sequence
and a retina configuration. An image sequence is a set of successive images displayed at
a fixed rate (24 frames/sec.). The retina configuration is the set of model parameters,
specified through an xml file. To summarize, the input files are:

   • An xml file (the retina configuration).

   • An image sequence (Figure 1.9 shows one of the images of the input sequence).




                Figure 1.9: An example of a gray-scale input image


    The software shows how the image appears at the different layers and, at the end, it
produces four output files summarizing the simulation, the spike trains and the neuron
positions. The spike train file, with a .spk extension, contains two columns: the first
column contains the unit (identifier) of the ganglion cell and the second the corresponding
firing time.
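
For illustration, such a .spk file can be read with a few lines of C++. The whitespace
separation and the field types (an integer unit and a floating-point firing time) are
assumptions made for this sketch; they are not part of the VirtualRetina specification.

#include <fstream>
#include <iostream>
#include <map>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: read_spk file.spk" << std::endl; return 1; }
    std::ifstream in(argv[1]);
    std::map<int, std::vector<double> > spikes;   // unit -> list of firing times
    int unit;
    double time;
    while (in >> unit >> time)                    // one "unit time" pair per line (assumed)
        spikes[unit].push_back(time);
    // Print the number of spikes emitted by each ganglion cell.
    for (std::map<int, std::vector<double> >::const_iterator it = spikes.begin();
         it != spikes.end(); ++it)
        std::cout << "unit " << it->first << ": " << it->second.size() << " spikes" << std::endl;
    return 0;
}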
    As the distribution of the ganglion cells is already configured in the input retina.xml
file, the output retina.xml file contains the data of the input.xml file and the metadata

Figure 1.10: The structure of the .spk file




Figure 1.11: The log-polar distribution of the ganglion cells in VirtualRetina. The
distribution is concentrated at the center of the retina (the fovea), and its density
decays exponentially with the distance from the fovea. The background color shows the
pixels of the image and the cell(s) associated with each pixel. VirtualRetina also allows
simulation with another scheme, the rectangular one, where the distribution of the
ganglion cells is uniform over the whole retina; one can, for instance, simulate a
rectangular scheme with a density of 1 cell/pixel, where each pixel corresponds to one
ganglion cell.




concerning the unit configuration: the identifier (unit) of each cell and its spatial
position in degrees (Figure 1.11). The degree is the spatial unit of measure in
VirtualRetina; it indicates how far the cell is from the center.
   The input retina.xml file looks like Figure 1.12.




Figure 1.12: The xml file is the ’written virtual retina’. It carries all the properties of
the different layers in the model.




1.5       Simulation via VirtualRetina
In this section we show some of the results that VirtualRetina produces. Figure 1.13
shows the signal at the different layers of the retina model. This is the first output
of the software, showing in real time the variation of the signal as the input images
vary over the sequence.
    We mainly used retina configurations with X, Y, ON and OFF cells¹ for the ganglion
cell layer. The input sequence is the default one (Figure 1.9). To change the configu-
ration of the ganglion layer it is sufficient to change 4 parameters in the retina.xml file.
   ¹ The X cells are specific retinal ganglion cells (neurons) involved in visual information processing
(Troy and Shou, 2002; Hughes, 1979). These cells differ from the related Y cells (also called alpha cells)
and W cells (also called gamma cells) by their morphology, response properties, and their projections
into the cell layers of the lateral geniculate nucleus of the thalamus, which transmits information to the
visual cortex (Lennie, 1980; Bowling and Michael, 1984; Sur et al., 1987; Tamamaki et al., 1995; Stanford
et al., 1983; Boycott and Wassle, 1991; Wassle and Boycott, 1991). The ON cells are activated in response
to a positive stimulus; conversely, the OFF cells are activated in response to an inhibition.


Figure 1.13: The signal at each layer of the retina model. Panels: (a) receptor layer,
(b) horizontal layer, (c) OPL current, (d) bipolar current, (e) fast adaptation layer,
(f) ganglion input current.

We used 24 frames/sec to mimic the natural rate at which the retina captures images.
Figure 1.14 shows several spike trains for different cell types. The difference between
ON and OFF cells is that the ON cells are sensitive to illumination while, conversely,
the OFF cells are sensitive to darkness.




Figure 1.14: Spike trains for different cell types: (a) X-ON cells, (b) Y-ON cells,
(c) X-OFF cells, (d) Y-OFF cells. The retina was configured with 16 cells in the fovea
and a log-polar distribution around it. The input is the default sequence (Figure 1.9).

  Figure 1.15 shows the spike trains for different populations of cells; even larger
simulations, with thousands of cells, are possible.


1.6     Conclusion
This chapter gave a quick overview of the retina from both the biological and the
simulation points of view. As intended, VirtualRetina can be used to produce spike
trains for thousands of ganglion cells. It also offers access to other layers of the
simulator, such as the input current to the ganglion cells. The time the simulator takes
on an ordinary personal machine (Ubuntu 9.10, 2 GB RAM, 1.66 GHz Core2Duo processor)
is reasonable: a few seconds to simulate a retina with a hundred fovea cells and 50
images as input sequence. The simulation time rises to minutes for long input sequences.




Figure 1.15: The spike trains for a large-scale simulation with thousands of cells. The
two panels show the responses of ganglion cells of two types ((a) XY cat cells, two
layers of cells; (b) Magno ON and OFF cells). One can see the time the cells take at
the beginning to reach the asymptotic behavior. This response corresponds to a moving
bar stimulus.
Chapter 2

Performing statistics with real and
VirtualRetina data

2.1     Introduction
In this chapter we explain the basic notions and tools with which we perform statistics
on spike trains. We begin with the theoretical basis, then introduce the EnaS library,
the main tool we use to perform our statistics. Finally, results on synthetic, real and
VirtualRetina data are presented.


2.2     What is a spike train?
As previously presented, a spike train is a structure that contains boolean data (0 and
1) describing the activity of a neuron. To build a spike train we need to know at which
times the neuron fires. A raster is the set of spike trains of several neurons, i.e., of a
neural network (Figure 2.1). We also refer to a neuron as a 'unit'.
By definition, the spike train of a neuron can be written as:

    S(t) = Σ_{k=1}^{n} δ(t − t_k)                                               (2.1)

where the t_k are the times at which the neuron fires, n is its total number of spikes
and δ is the Dirac distribution.
    Figure 2.1 shows a typical raster for a network of 600 ganglion cells during 3
seconds (from a VirtualRetina simulation).
    The length of the raster is its number of time samples, and N is the number of
neurons in the network.




Figure 2.1: A typical spike train (produced with VirtualRetina)




Figure 2.2: The raster of 5 neurons, with an illustration of its length, its size and a
spike block of range R




2.3      Performing statistics
The data of a spike train are boolean: to perform statistics we deal with 'ones' and
'zeros' that represent the activity and non-activity of neurons. Let i = 0, 1, ..., N − 1
denote the neuron index. The activity of neuron i is represented by ω_i(t), where:

    ω_i(t) = 1 if neuron i fires at time t, and 0 otherwise.                    (2.2)
   We also define three further terms:

  1. A spike pattern represents the activity of the N neurons at a specific time t:

         ω(t) = [ω_i(t)]_{i=0}^{N−1}                                            (2.3)

     For numerical purposes, a spike pattern is encoded as the integer:

         ω(t) = Σ_{i=0}^{N−1} 2^i ω_i(t)                                        (2.4)

  2. A spike block represents the activity of the N neurons between times t_1 and t_2,
     in other words several consecutive spike patterns:

         ω_{t_1}^{t_2} = [ω(t)]_{t_1 ≤ t ≤ t_2}                                 (2.5)

  3. A raster plot is the activity of the N neurons over all times:

         ω = [ω(t)]_{t=−∞}^{0}                                                  (2.6)
    We consider that a neuron fires within a time bin of precision δ, i.e., between t and
t + δ, with typically δ = 1 ms, the smallest time unit of the raster time scale. A simple
characteristic of a spike train is the firing rate:

    r_i(t) = n_i(t) / t   (spikes/sec.)                                         (2.7)

where n_i(t) is the number of times neuron i fired up to time t.
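
To make these notions concrete, here is a minimal C++ sketch (independent of
VirtualRetina and EnaS) of a boolean raster, of the integer encoding of a spike pattern
(Eq. 2.4) and of the firing rate (Eq. 2.7). The raster layout, one row per neuron and one
column per 1 ms time bin, is an assumption made for illustration.

#include <iostream>
#include <vector>

typedef std::vector<std::vector<int> > Raster;    // raster[i][t] = omega_i(t), 0 or 1

// Eq. 2.4: omega(t) = sum_i 2^i * omega_i(t), the integer code of the pattern at time t.
unsigned long encodePattern(const Raster& w, int t) {
    unsigned long code = 0;
    for (std::size_t i = 0; i < w.size(); ++i)
        if (w[i][t]) code |= (1UL << i);
    return code;
}

// Eq. 2.7: r_i = n_i / duration, with bins of width delta seconds (1 ms by default).
double firingRate(const Raster& w, int i, double delta = 0.001) {
    int n = 0;
    for (std::size_t t = 0; t < w[i].size(); ++t) n += w[i][t];
    return n / (w[i].size() * delta);             // spikes per second
}

int main() {
    Raster w(3, std::vector<int>(1000, 0));       // 3 neurons, 1000 bins of 1 ms
    w[0][10] = w[1][10] = w[2][500] = 1;          // a few spikes placed by hand
    std::cout << "pattern at t = 10: " << encodePattern(w, 10) << std::endl;   // 1 + 2 = 3
    std::cout << "r_0 = " << firingRate(w, 0) << " spikes/s" << std::endl;     // 1 spike over 1 s
    return 0;
}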


Observables
We call observable a function that associates a real number with a raster plot. For
example, observing whether neuron i fires at time t = 0 is an observable. We can also
observe whether neurons i and j both fired at time t = 0, and attribute the following
function to this event:

    φ(ω) = ω_i(0) ω_j(0)                                                        (2.8)

Likewise, if neuron j fired a time interval τ after neuron i:

    φ(ω) = ω_i(0) ω_j(τ)                                                        (2.9)
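
As an illustration, the average of the pairwise observable of Eq. 2.9 can be estimated
empirically by sliding the time origin along the raster, which is justified by the
stationarity assumption introduced in the next subsection. This is only a sketch, not
EnaS code; the raster layout is the same assumption as before.

#include <iostream>
#include <vector>

// Empirical average of phi(omega) = omega_i(0) * omega_j(tau) (Eq. 2.9),
// estimated by sliding the time origin over a raster of length T.
double averageObservable(const std::vector<int>& wi, const std::vector<int>& wj, int tau) {
    double sum = 0.0;
    const int T = static_cast<int>(wi.size());
    for (int t = 0; t + tau < T; ++t)
        sum += wi[t] * wj[t + tau];
    return sum / (T - tau);
}

int main() {
    std::vector<int> wi(1000, 0), wj(1000, 0);
    wi[10] = 1;
    wj[13] = 1;                                                 // neuron j fires 3 bins after neuron i
    std::cout << averageObservable(wi, wj, 3) << std::endl;     // 1 coincidence over 997 windows
    return 0;
}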

2.3.1        The statistical models
We want to characterize the statistics of spike trains. Our main assumptions are that
the spike train is stationary and non-deterministic, and that we study its asymptotic
behavior.
The simplest statistical model is the Bernoulli distribution.
The probability that a block ω_s^t corresponds to a known word¹ a_s^t is:

    µ(ω_s^t = a_s^t) = µ(ω_i(k) = a_i(k), i = 1...N, k = s...t)
                     = Π_{i=1}^{N} Π_{k=s}^{t} µ(ω_i(k) = a_i(k))               (2.10)
                     = Π_{i=1}^{N} r_i^{n_i(t,s)} (1 − r_i)^{t−s−n_i(t,s)}

µ refers to the probability that the event between parentheses happens; n_i and r_i are
respectively the number of spikes and the firing rate of neuron i in the time interval
[s, t]. In the Bernoulli distribution, the probability that neuron i fires in a given time
bin is its firing rate. The correlation between two neurons is then:

    C_{i,j} = |µ(ω_i(t) ω_j(t)) − µ(ω_i(t)) µ(ω_j(t))|                          (2.11)

Thus, C_{i,j} measures how often the two neurons i and j fire together, minus the
product of their firing rates.
More generally, given a spike train, we look for the statistical model of the data
representing this spike train. For example, in a Bernoulli model, we search for the
probability µ that corresponds to the firing rates r_i (µ(ω_i) = r_i).
The entropy measures how disorganized the system is. For spike blocks ω distributed
according to µ, we define the entropy as:

    h(µ) = − Σ_ω µ(ω) log µ(ω)                                                  (2.12)

where the sum runs over all possible spike blocks ω (each spike pattern being encoded
by the integer ω(t) = Σ_{i=0}^{N−1} 2^i ω_i(t)). We take some constraints into account
when maximizing the entropy:

  1. The first constraint requires matching the experimentally measured averages:
     µ(ϕ_l) = ϕ_l^exp, the experimental average of a function ϕ_l.

  2. The second constraint is the classic normalization of a probability: Σ_ω µ(ω) = 1.


Gibbs Potential
A Gibbs distribution is a probability distribution that maximizes the statistical entropy
under the constraint that the experimental average ϕ̄_l of each prescribed function ϕ_l
is equal to its average with respect to µ.
  ¹ By the term word we denote the structure that represents the neuron activity through a spike block.


This distribution is such that the probability of a block ω, µ(ω), behaves like e^{ψ(ω)},
where ψ(ω) is the Gibbs potential, defined by:

    ψ = Σ_{l=1}^{L} λ_l φ_l                                                     (2.13)

where the λ_l are a set of Lagrange multipliers. Equation (2.13) represents the statistical
model. Additionally, such a potential obeys:

    P[ψ] = h[µ] + µ[ψ]                                                          (2.14)

where µ is the probability distribution we are looking for (µ(φ_l) = C_l), and P is the
topological pressure.
    The process of entropy maximization is then equivalent to finding the parameters λ_l.
For example, to look at the correlation between two neurons, we suppose that the
statistical model is given by the following Gibbs potential:

    ψ(ω) = λ_1 ω_0(0) + λ_2 ω_1(0) + λ_3 ω_0(0) ω_1(0)                          (2.15)

In this equation we have three monomials, each multiplied by a coefficient λ_i. The EnaS
library allows us to find these parameters, given a spike train, by means of entropy
maximization. To read off the correlation, we interpret the coefficient λ_3: if this
coefficient is large in the model, then the number of times neurons 0 and 1 fired together
is high, which means that they are strongly correlated.
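
As a toy illustration of Eq. 2.15, the sketch below normalises e^{ψ(ω)} over the four
instantaneous patterns of two neurons, showing how a large λ_3 raises the weight of the
pattern in which both neurons fire together. This is a deliberate simplification: the
actual framework works on spatio-temporal blocks and involves the topological pressure
of Eq. 2.14, and the λ values used here are arbitrary, not estimated ones.

#include <cmath>
#include <iostream>

int main() {
    const double lambda1 = -1.0, lambda2 = -1.0;  // single-neuron terms (arbitrary values)
    const double lambda3 = 2.0;                   // pairwise interaction term
    double weight[2][2];
    double Z = 0.0;
    for (int w0 = 0; w0 <= 1; ++w0)
        for (int w1 = 0; w1 <= 1; ++w1) {
            const double psi = lambda1 * w0 + lambda2 * w1 + lambda3 * w0 * w1;   // Eq. 2.15
            weight[w0][w1] = std::exp(psi);
            Z += weight[w0][w1];                  // normalisation over the four patterns
        }
    for (int w0 = 0; w0 <= 1; ++w0)
        for (int w1 = 0; w1 <= 1; ++w1)
            std::cout << "mu(" << w0 << "," << w1 << ") = " << weight[w0][w1] / Z << std::endl;
    return 0;
}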


2.4     The EnaS Library
EnaS (Figure 2.3), the Event neural assembly Simulation, is a dedicated C++ library.
It has been developed in collaboration between the NeuroMathComp and Cortex teams
at INRIA Sophia-Antipolis. The library is dedicated to neural simulation and to
performing statistics on real or synthetic data.




                              Figure 2.3: The Enas Logo


The source code is available on the EnaS website (http://enas.gforge.inria.
fr/) with some tutorial documentation and examples of the use of its classes.
The source does not need to be installed or configured: it is sufficient to put it in the
working directory and to add #include "EnaS.h" and using namespace enas; to the
program. Compiling a program that includes "EnaS.h" is slightly different because the
library uses external libraries such as gsl, gpl and glpk, which have to be installed and
configured beforehand; the compilation command is then the following:

 g++ -Wall -lgsl -lgslcblas -lglpk -lm MyFile.cpp -o MyOutputFile.o

   The main classes we used from this library are the following:

   • FileTimeSequence: loads a 'time-unit' or .spk file containing the boolean data
     of a spike train.

   • PrefixTree: defines a tree from a Gibbs potential, which allows the estimation
     of some statistical parameters.

   • GibbsPotential: allows the modeling of a set of data and the extraction of the
     model parameters, such as the λ coefficients.
    This library is helpful to our project because it contains classes that support our
statistical analysis. For example, EnaS can estimate the parameters of the statistical
model of a spike train (the λs in Eq. 2.15).


2.5     Performing statistics with the EnaS library and
        results
The main idea behind using EnaS on spike train data is to estimate the parameters
λ_l between neurons. As a special case, we are interested in the correlation. This
correlation depends on the statistical model that the spike train follows: in a Bernoulli
model, the correlation tends to zero along a raster.
    Data following a Bernoulli distribution can be generated for different probability
values. Note that, in a Bernoulli distribution, the probabilities of the two possible
outcomes are complementary, i.e., if p is the probability that the event happens, then
the probability that it does not happen is 1 − p.
The point of measuring the correlation between two neurons is not simply to estimate
it and check whether its value is very low; rather, it is to see how this correlation
evolves as a function of the raster length. One can indeed show, theoretically and
experimentally, that this correlation decreases with the raster length (on the log scale
used in the figures below) as:

    C_{i,j}(T) = k / √T                                                         (2.16)

where k is a real number and T is the raster length.

2.5.1     Synthetic data
Figure 2.4 shows the decay of the correlation as a function of the raster length. The
data were generated randomly with a Bernoulli distribution and p = 0.5. The same
behavior is observed for different Bernoulli probabilities (from p = 0.1 to p = 0.9),
Figure 2.5.
A fitting technique (Levenberg-Marquardt) allowed us to fit the results with k/T^a.
Looking at the results over the whole time scale, we observe that they closely follow
the line of equation 0.6/√T (i.e., k = 0.6 and a = 0.5).
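
The sketch below reproduces the spirit of this synthetic-data experiment: it generates
two independent Bernoulli(p) spike trains and prints the estimated correlation of
Eq. 2.11 for increasing raster lengths, to be compared with k/√T. It relies on the
standard C++ random generator and is not the actual generation and fitting code used
to produce the figures.

#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

// Empirical correlation of Eq. 2.11 between two boolean spike trains of equal length.
double correlation(const std::vector<int>& wi, const std::vector<int>& wj) {
    double mi = 0.0, mj = 0.0, mij = 0.0;
    const std::size_t T = wi.size();
    for (std::size_t t = 0; t < T; ++t) { mi += wi[t]; mj += wj[t]; mij += wi[t] * wj[t]; }
    return std::fabs(mij / T - (mi / T) * (mj / T));
}

int main() {
    const double p = 0.5;                                        // Bernoulli firing probability
    for (std::size_t T = 1000; T <= 1000000; T *= 10) {
        std::vector<int> w0(T), w1(T);
        for (std::size_t t = 0; t < T; ++t) {                    // two independent Bernoulli(p) trains
            w0[t] = (std::rand() / (double)RAND_MAX) < p;
            w1[t] = (std::rand() / (double)RAND_MAX) < p;
        }
        std::cout << "T = " << T << "   C_01 = " << correlation(w0, w1)
                  << "   k/sqrt(T) with k = 0.6: " << 0.6 / std::sqrt((double)T) << std::endl;
    }
    return 0;
}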




Figure 2.4: The correlation between two neurons whose raster plots follow a Bernoulli
distribution. The data are synthetic with probability p = 0.4, and the plot is on a
logarithmic scale. The correlation computed with the empirical algorithms decreases
closely to the k/√T line.




Figure 2.5: Evolution of the correlation for different probability values (Bernoulli
distribution)




Figure 2.6: Evolution of the correlation for different coincidence windows. The delay is
the term we use to express that the two neurons fire within a time interval τ. We
studied the correlation for delay values going from 1 to 6.


2.5.2     Real data
One of our collaborators (Bogdan Kolomiets, INSERM) supplied us with real data
acquisitions of ganglion cell activity. The signals were acquired with an "MEA chip"
that contains 58 electrodes (Figure 2.7).




                  Figure 2.7: The MEA chip for ganglion recordings

   Data characteristics:
   • The total duration of the acquisition is 93 s.
   • 30 sec. of spontaneous activity followed by 63 sec. of evoked-potential activity
     (Figure 2.8). The 63 sec. of evoked potential consisted of 1 sec. of light stimulus
     followed by a 5 sec. inter-stimulus interval with no stimulus.




Figure 2.8: The evoked-potential recording. A light stimulus is presented in front of the
eye and, at the back, at the ganglion cell layer, the electric potential is recorded with
the MEA acquisition grid.

   Statistics on the correlation between randomly chosen pairs of neurons in the real
acquired data show that the correlation does not tend to 0 when the raster length tends
to ∞. This fact supports the hypothesis that the neurons in the vertebrate retina are
correlated, hence our motivation to add the correlation property to the ganglion cells
in VirtualRetina.




    We applied the Gibbs potential model (Equation 2.15) to these data in order to
estimate the correlation between several pairs of neurons as a function of the raster
length.




Figure 2.9: Correlation between ganglion cells (at rest) in a real acquisition. The figure
shows the correlation for three pairs of neurons, at several distances. The time scale
corresponds to the first 30 sec. of the acquisition described in Section 2.5.2, with the
neurons at rest.


Figure 2.10: Correlation between ganglion cells (evoked potential) in a real acquisition.
The figure shows the correlation for three pairs of neurons, at several distances. The
time scale corresponds to the last 60 sec. of the acquisition described in Section 2.5.2,
under evoked potential.

2.5.3     VirtualRetina data
Data from VirtualRetina can be generated either with the installed software or through
the web-service implementation. It is preferable to install and use the software, because
it is more controllable and all the parameters of the model remain accessible.
The figures below (2.11, 2.12, 2.13, 2.14) show the correlation between different pairs
of neurons at several distances. For the 4 cell types, the correlation tends to zero for
large raster lengths.
These figures show the evolution of the correlation value as a function of the raster
length. There are some perturbations at the extremities of the curves; the perturbation
at the beginning comes from the non-asymptotic behavior of the spike train.




Figure 2.11: Correlation between X-ON ganglion cells in VirtualRetina




Figure 2.12: Correlation between Y-ON ganglion cells in VirtualRetina



Figure 2.13: Correlation between X-OFF ganglion cells in VirtualRetina




Figure 2.14: Correlation between Y-OFF ganglion cells in VirtualRetina



2.6      Conclusion
We have presented in this chapter the statistical tools on which we rely to perform
statistics on spike trains, and explained the mathematical, numerical and algorithmic
frameworks behind them. With these tools we have shown statistics for real acquisitions,
simulated data and synthetic data. We have shown the difference between the Bernoulli
model data, the VirtualRetina data and the real acquisitions by studying the correlation
as a function of the raster length in each case.
    In synthetic data, the correlation decays along the line a/√T. In VirtualRetina
simulations, the correlation also follows this line. On the contrary, the correlation
between different pairs of neurons in real acquisitions does not verify this property;
hence the ganglion cells in VirtualRetina are not correlated, while the real ones are.
    There do exist connections between cells in the VirtualRetina model, but at the lower
layers, not at the ganglion cell layer. The results have also shown that connections
between the retinal cells at the lower layers do not automatically imply that the ganglion
cells are correlated: the ganglion cells also need connections between themselves.
    In the second part of the project, we would like to add connections between the
ganglion cells in VirtualRetina in order to enhance its biological plausibility, i.e., we
will translate the values of the statistical model parameters (estimated from real ganglion
cells) into connections and add them to the ganglion cells in VirtualRetina.




Chapter 3

Inferring connectivity between
Retinal Ganglion Cells

3.1     Introduction
In the first section of this chapter we show results that give an idea of the connectivity
between retinal ganglion cells from a real data acquisition. The second section will be a
bibliographical study. This chapter is in fact complementary to the second one: both
address how to assess the connectivity between retinal ganglion cells, but in two different
ways. Previously we measured the evolution of the correlation as a function of the raster
length; now we measure the connectivity quantitatively using the parameters of Gibbs
models.


3.2     The connectivity between ganglion cells from
        real data acquisition
Thanks to data acquired by one of our collaborators (Bogdan Kolomiets, Institut de la
Vision, Paris), we could use the EnaS library to compute some statistics about
connectivity. Technically, we can get an idea of the connectivity between two neurons
from the λ_l in the Gibbs potential equation:

    ψ = Σ_{l=1}^{L} λ_l φ_l                                                     (3.1)

    This equation was described previously in the second chapter (Section 2.3.1).
In fact, the bigger λ_l is, the more frequently the corresponding observable occurs in
the spike train. Consequently, if we measure the λ_l for several pairs of neurons, we get
an idea of their connectivity.
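
As a sketch of the post-processing behind Figure 3.2, once the λ_0j have been estimated
one only needs to pair each coefficient with the distance between Cell 0 and Cell j. In
the code below the electrode coordinates and the λ_0j values are placeholders, not real
data, and the EnaS estimation step itself is not shown.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

struct Cell { double x, y; };                     // electrode coordinates in micrometres (placeholders)

int main() {
    const Cell cells[6] = { {0, 0}, {200, 0}, {400, 0}, {600, 0}, {800, 0}, {1000, 0} };
    const double lambda0j[6] = { 0.0, 0.8, 1.1, 0.9, 0.4, 0.2 }; // placeholder estimates
    std::vector<std::pair<double, double> > points;              // (distance to Cell 0, lambda_0j)
    for (int j = 1; j < 6; ++j) {
        const double dx = cells[j].x - cells[0].x;
        const double dy = cells[j].y - cells[0].y;
        points.push_back(std::make_pair(std::sqrt(dx * dx + dy * dy), lambda0j[j]));
    }
    std::sort(points.begin(), points.end());                     // sort by distance for plotting
    for (std::size_t k = 0; k < points.size(); ++k)
        std::cout << points[k].first << " um   lambda_0j = " << points[k].second << std::endl;
    return 0;
}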
    Recalling the MEA acquisition electrode (Fig. 3.1), in the following we will:

Figure 3.1: The MEA chip for ganglion recordings

1. Take a collection of six neurons.

2. Apply the Gibbs model.

3. Use EnaS to estimate the parameters λ_l.




Figure 3.2: The 6 graphs show the value of λ_0j as a function of distance for different
network arrangements (vertical (e.g., cells 12, 13, 14, 15, 16, 17), horizontal, random).
The idea is to see how the distance affects the connectivity factor. The x-axis represents
the distance between the first cell (called Cell 0) and the j-th cell (called Cell j). The
y-axis represents the connectivity factor λ_0j between Cell 0 and Cell j (j = 1, 2, ..., 5).
In all the graphs we observe that part or all of the curve increases at the beginning and
then decreases beyond some distance (commonly here between 400 and 1000 µm).




Bibliography

[1] Adrien Wohrer, Model and large-scale simulator of a biological retina, with contrast
    gain control. University of Nice Sophia-Antipolis, INRIA, 2009.

[2] J.C. Vasquez, T. Vieville, B. Cessac, Entropy-based parametric estimation of spike
    train statistics. INRIA, 2009.

[3] S. Cocco, S. Leibler and R. Monasson, Neuronal couplings between retinal ganglion
    cells inferred by efficient inverse statistical physics methods. PNAS, 2009.

[4] C. Shalizi, K. Shalizi, Blind Construction of Optimal Nonlinear Recursive Predictors
    for Discrete Sequences. CoRR, 2004.

[5] R. Haslinger, K. Klinkner, C. Shalizi, The Computational Structure of Spike Trains.
    Neural Computation, vol. 22 (2010), pp. 121–157.




                                          35

Contenu connexe

Tendances

A buffer overflow study attacks and defenses (2002)
A buffer overflow study   attacks and defenses (2002)A buffer overflow study   attacks and defenses (2002)
A buffer overflow study attacks and defenses (2002)Aiim Charinthip
 
Real-Time Non-Photorealistic Shadow Rendering
Real-Time Non-Photorealistic Shadow RenderingReal-Time Non-Photorealistic Shadow Rendering
Real-Time Non-Photorealistic Shadow RenderingTamás Martinec
 
Micazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project reportMicazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project reportAnkit Singh
 
Badripatro dissertation 09307903
Badripatro dissertation 09307903Badripatro dissertation 09307903
Badripatro dissertation 09307903patrobadri
 
Symbol from NEM Whitepaper 0.9.6.3
Symbol from NEM Whitepaper 0.9.6.3Symbol from NEM Whitepaper 0.9.6.3
Symbol from NEM Whitepaper 0.9.6.3Alexander Schefer
 
Cuda toolkit reference manual
Cuda toolkit reference manualCuda toolkit reference manual
Cuda toolkit reference manualPiyush Mittal
 
Lecture notes on hybrid systems
Lecture notes on hybrid systemsLecture notes on hybrid systems
Lecture notes on hybrid systemsAOERA
 
4.daftar gambar
4.daftar gambar4.daftar gambar
4.daftar gambarMamoit
 
Metatron Technology Consulting 's MySQL to PostgreSQL ...
Metatron Technology Consulting 's MySQL to PostgreSQL ...Metatron Technology Consulting 's MySQL to PostgreSQL ...
Metatron Technology Consulting 's MySQL to PostgreSQL ...webhostingguy
 
Ross_Cannon_4th_Year_Project_Mastercopy
Ross_Cannon_4th_Year_Project_MastercopyRoss_Cannon_4th_Year_Project_Mastercopy
Ross_Cannon_4th_Year_Project_MastercopyRoss Cannon
 
Six Myths and Paradoxes of Garbage Collection
Six Myths and Paradoxes of Garbage Collection Six Myths and Paradoxes of Garbage Collection
Six Myths and Paradoxes of Garbage Collection Holly Cummins
 
Lesson in electric circuits v3
Lesson in electric circuits v3Lesson in electric circuits v3
Lesson in electric circuits v3ayalewtfr
 

Tendances (20)

A buffer overflow study attacks and defenses (2002)
A buffer overflow study   attacks and defenses (2002)A buffer overflow study   attacks and defenses (2002)
A buffer overflow study attacks and defenses (2002)
 
Real-Time Non-Photorealistic Shadow Rendering
Real-Time Non-Photorealistic Shadow RenderingReal-Time Non-Photorealistic Shadow Rendering
Real-Time Non-Photorealistic Shadow Rendering
 
Micazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project reportMicazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project report
 
Jiu manual
Jiu   manualJiu   manual
Jiu manual
 
Gdbint
GdbintGdbint
Gdbint
 
Badripatro dissertation 09307903
Badripatro dissertation 09307903Badripatro dissertation 09307903
Badripatro dissertation 09307903
 
Symbol from NEM Whitepaper 0.9.6.3
Symbol from NEM Whitepaper 0.9.6.3Symbol from NEM Whitepaper 0.9.6.3
Symbol from NEM Whitepaper 0.9.6.3
 
Cuda toolkit reference manual
Cuda toolkit reference manualCuda toolkit reference manual
Cuda toolkit reference manual
 
Lecture notes on hybrid systems
Lecture notes on hybrid systemsLecture notes on hybrid systems
Lecture notes on hybrid systems
 
BA1_Breitenfellner_RC4
BA1_Breitenfellner_RC4BA1_Breitenfellner_RC4
BA1_Breitenfellner_RC4
 
Di11 1
Di11 1Di11 1
Di11 1
 
Calculus
CalculusCalculus
Calculus
 
4.daftar gambar
4.daftar gambar4.daftar gambar
4.daftar gambar
 
Doctrine Manual
Doctrine ManualDoctrine Manual
Doctrine Manual
 
Metatron Technology Consulting 's MySQL to PostgreSQL ...
Metatron Technology Consulting 's MySQL to PostgreSQL ...Metatron Technology Consulting 's MySQL to PostgreSQL ...
Metatron Technology Consulting 's MySQL to PostgreSQL ...
 
Pylons
PylonsPylons
Pylons
 
PhD-2013-Arnaud
PhD-2013-ArnaudPhD-2013-Arnaud
PhD-2013-Arnaud
 
Ross_Cannon_4th_Year_Project_Mastercopy
Ross_Cannon_4th_Year_Project_MastercopyRoss_Cannon_4th_Year_Project_Mastercopy
Ross_Cannon_4th_Year_Project_Mastercopy
 
Six Myths and Paradoxes of Garbage Collection
Six Myths and Paradoxes of Garbage Collection Six Myths and Paradoxes of Garbage Collection
Six Myths and Paradoxes of Garbage Collection
 
Lesson in electric circuits v3
Lesson in electric circuits v3Lesson in electric circuits v3
Lesson in electric circuits v3
 

En vedette

Alzheimers europe cm presentation m lynch oct 2014
Alzheimers europe cm presentation m lynch oct 2014Alzheimers europe cm presentation m lynch oct 2014
Alzheimers europe cm presentation m lynch oct 2014Irish Hospice Foundation
 
N Fla. couple arrested for faking docs claim home
N Fla. couple arrested for faking docs claim homeN Fla. couple arrested for faking docs claim home
N Fla. couple arrested for faking docs claim homeaboundinginsani85
 
22tcn18 79chuong1quydinhcoban-121223060009-phpapp02
22tcn18 79chuong1quydinhcoban-121223060009-phpapp0222tcn18 79chuong1quydinhcoban-121223060009-phpapp02
22tcn18 79chuong1quydinhcoban-121223060009-phpapp02Nguyễn Thuấn
 
Hipaa education
Hipaa educationHipaa education
Hipaa educationeklundc
 
MATADORES EN CHOTA
MATADORES EN CHOTAMATADORES EN CHOTA
MATADORES EN CHOTAadrixita
 
Nebosh Certificate 130315
Nebosh Certificate 130315Nebosh Certificate 130315
Nebosh Certificate 130315Naresh Kumar
 

En vedette (13)

Alzheimers europe cm presentation m lynch oct 2014
Alzheimers europe cm presentation m lynch oct 2014Alzheimers europe cm presentation m lynch oct 2014
Alzheimers europe cm presentation m lynch oct 2014
 
Trips
TripsTrips
Trips
 
Happy new year 2015
Happy new year 2015Happy new year 2015
Happy new year 2015
 
N Fla. couple arrested for faking docs claim home
N Fla. couple arrested for faking docs claim homeN Fla. couple arrested for faking docs claim home
N Fla. couple arrested for faking docs claim home
 
Courbe MTF
Courbe MTFCourbe MTF
Courbe MTF
 
research guide
research guideresearch guide
research guide
 
Joomla 3 lesson4
Joomla 3 lesson4Joomla 3 lesson4
Joomla 3 lesson4
 
22tcn18 79chuong1quydinhcoban-121223060009-phpapp02
22tcn18 79chuong1quydinhcoban-121223060009-phpapp0222tcn18 79chuong1quydinhcoban-121223060009-phpapp02
22tcn18 79chuong1quydinhcoban-121223060009-phpapp02
 
Hipaa education
Hipaa educationHipaa education
Hipaa education
 
Top ten
Top tenTop ten
Top ten
 
Secrets_of_Nations
Secrets_of_NationsSecrets_of_Nations
Secrets_of_Nations
 
MATADORES EN CHOTA
MATADORES EN CHOTAMATADORES EN CHOTA
MATADORES EN CHOTA
 
Nebosh Certificate 130315
Nebosh Certificate 130315Nebosh Certificate 130315
Nebosh Certificate 130315
 

Similaire à Toward a realistic retina simulator

3D Content for Dream-like VR
3D Content for Dream-like VR3D Content for Dream-like VR
3D Content for Dream-like VRRoland Bruggmann
 
Seismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete InvestigationSeismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete InvestigationAli Osman Öncel
 
continuous_time_signals_and_systems-2013-09-11-uvic.pdf
continuous_time_signals_and_systems-2013-09-11-uvic.pdfcontinuous_time_signals_and_systems-2013-09-11-uvic.pdf
continuous_time_signals_and_systems-2013-09-11-uvic.pdfZiaOul
 
A Matlab Implementation Of Nn
A Matlab Implementation Of NnA Matlab Implementation Of Nn
A Matlab Implementation Of NnESCOM
 
Trade-off between recognition an reconstruction: Application of Robotics Visi...
Trade-off between recognition an reconstruction: Application of Robotics Visi...Trade-off between recognition an reconstruction: Application of Robotics Visi...
Trade-off between recognition an reconstruction: Application of Robotics Visi...stainvai
 
Stochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning PerspectiveStochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning Perspectivee2wi67sy4816pahn
 
Im-ception - An exploration into facial PAD through the use of fine tuning de...
Im-ception - An exploration into facial PAD through the use of fine tuning de...Im-ception - An exploration into facial PAD through the use of fine tuning de...
Im-ception - An exploration into facial PAD through the use of fine tuning de...Cooper Wakefield
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemkurkute1994
 
Specification of the Linked Media Layer
Specification of the Linked Media LayerSpecification of the Linked Media Layer
Specification of the Linked Media LayerLinkedTV
 
Perl <b>5 Tutorial</b>, First Edition
Perl <b>5 Tutorial</b>, First EditionPerl <b>5 Tutorial</b>, First Edition
Perl <b>5 Tutorial</b>, First Editiontutorialsruby
 

Similaire à Toward a realistic retina simulator (20)

3D Content for Dream-like VR
3D Content for Dream-like VR3D Content for Dream-like VR
3D Content for Dream-like VR
 
Seismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete InvestigationSeismic Tomograhy for Concrete Investigation
Seismic Tomograhy for Concrete Investigation
 
feilner0201
feilner0201feilner0201
feilner0201
 
continuous_time_signals_and_systems-2013-09-11-uvic.pdf
continuous_time_signals_and_systems-2013-09-11-uvic.pdfcontinuous_time_signals_and_systems-2013-09-11-uvic.pdf
continuous_time_signals_and_systems-2013-09-11-uvic.pdf
 
Winning the Game
Winning the GameWinning the Game
Winning the Game
 
A Matlab Implementation Of Nn
A Matlab Implementation Of NnA Matlab Implementation Of Nn
A Matlab Implementation Of Nn
 
EFSL
EFSLEFSL
EFSL
 
Trade-off between recognition an reconstruction: Application of Robotics Visi...
Trade-off between recognition an reconstruction: Application of Robotics Visi...Trade-off between recognition an reconstruction: Application of Robotics Visi...
Trade-off between recognition an reconstruction: Application of Robotics Visi...
 
Stochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning PerspectiveStochastic Processes and Simulations – A Machine Learning Perspective
Stochastic Processes and Simulations – A Machine Learning Perspective
 
PythonIntro
PythonIntroPythonIntro
PythonIntro
 
Im-ception - An exploration into facial PAD through the use of fine tuning de...
Im-ception - An exploration into facial PAD through the use of fine tuning de...Im-ception - An exploration into facial PAD through the use of fine tuning de...
Motivation

VirtualRetina, recently developed in the Odyssée team at INRIA Sophia-Antipolis, allows large-scale simulation of the retina at a reasonable computational cost. The software simulates the vertebrate retina by implementing the successive steps of the light-to-spikes transformation: light is the input, a spike train is the output, and the output depends on the retina parameters. The current implementation does not take into account the correlation between the ganglion cells at the output level.

Starting from an understanding of the VirtualRetina software and of the statistical framework used to study ganglion cells, the aim of this work is to introduce an important retinal characteristic into the simulator in order to improve its biological plausibility. The motivation is twofold:

1. To provide visual neuroscientists with a tool that generates large-scale input for cortical simulations.
2. Bio-inspired image compression: through our collaboration with I3S, who work on image compression and reconstruction from spiking data, a parallel line of work studies the effect of introducing connections between ganglion cells on compression efficiency.

Previous works have shown that there are connections between retinal cells at different levels. VirtualRetina takes the connections between cells into account at several levels, but not at the level of the ganglion cells. In principle, the relations between cells at the earlier levels do not propagate through the different layers of VirtualRetina, so there should be no correlation between the ganglion cells; this hypothesis is verified by the statistics presented in the second chapter. The work requires multidisciplinary knowledge and, thanks to the team and its collaborations, people from different domains are working to accomplish the task. The work is divided into two parts. The first part, described in this preliminary report, is to study the previous work on VirtualRetina and then to perform statistics on the VirtualRetina output in order to verify that there is no correlation between ganglion cells in VirtualRetina.
Chapter 1
The Virtual Retina

1.1 Introduction

The goal of this chapter is to introduce the VirtualRetina software. It is divided into three parts:

1. The first part briefly presents the vertebrate retina.
2. The second part explains the model behind the VirtualRetina software.
3. The third part explains the software itself.

The main reference for this chapter is the thesis of Adrien Wohrer [1].

1.2 The Vertebrate Retina

The vertebrate retina (situated at the back of the eye, Figure 1.1) is the interface between the incoming light and the neural pathways. Its main role is the transduction of light rays into electrical currents. These electrical currents are the action potentials that travel to the cortex; the currents from the right eye go to the left cortex and vice versa (Figure 1.2). The light transduction is carried out by consecutive, complex phenomena that take place in the different layers of the retina. The complexity of the retinal structure makes it a wide field of study. To translate light into action potentials, several processing stages take place, going from the photoreceptors to the ganglion cells. Note that we use the term spike instead of action potential: in general, a spike denotes that a neuron fires, i.e., that the neuron reaches the action-potential threshold and emits a potential at its output (Figure 1.3).
Figure 1.1: At left, the eye from the cornea to the optic nerve. At right, a magnification shows the retina in some detail.
Figure 1.2: The visual pathways, from the optic nerves to the cortex.
Figure 1.3: Up: the action potential. Down: the corresponding spike train.

Back to the retinal structure. The following layers define the retina:

• Light receptors.
• Horizontal cells.
• Bipolar cells.
• Amacrine cells.
• Ganglion cells.

These different cell types are represented in Figure 1.4. Each of them plays a role in the processing of the image of the external world. At last count, the retinal network is composed of at least 50 clearly distinct cell types.
Figure 1.4: The different cell types in the vertebrate retina.

Figure 1.5 shows the trajectory of the light-to-spikes process through the different layers. The light comes from outside and is optically processed by the eye's compartments (pupil, lens, ...); it then passes through the neural layers and hits the very back of the retina, where the photoreceptors lie. A chemical process in the photoreceptors transforms the light into an electrical current. There are two types of receptors: the cones and the rods. The rods are very sensitive to light and motion. The cones are activated at high illumination levels and are sensitive to colors and shapes.
Figure 1.5: The different layers in the retina.
Figure 1.6: The different layers in the retina.

The electrical current produced by these receptors then goes to the OPL (Outer Plexiform Layer), which contains two kinds of cells: the horizontal and bipolar cells. As Figure 1.6 shows, there are direct connections between consecutive components as well as between non-consecutive components.
The light receptors excite the horizontal and bipolar cells, and the bipolar and ganglion cells excite the amacrine cells. There are also feedback actions: the horizontal cells give inhibitory feedback to the bipolar cells, and the amacrine cells give inhibitory feedback to the ganglion and bipolar cells.

1.3 The underlying retina model

This part explains the VirtualRetina model. The author of the VirtualRetina software chose a contrast gain control model to implement the retinal functionality, because he wanted to allow large-scale simulation while keeping the biological plausibility of the underlying model. The term "contrast gain control" refers to the ability of the retinal system to control the transfer function of the contrast information.

Figure 1.7 represents the retina model. From incoming light to spike generation, each stage is represented by a processed image and the corresponding mathematical equation. The incoming light is represented by L(x, y, t), the illumination at each pixel of the image at time t. This light, convolved in time and space at the receptor layer, provides a positive (excitatory) contribution to the bipolar cells. Conversely, the horizontal cells apply another spatio-temporal convolution to this signal and act through a negative feedback. From these two contributions comes the difference-of-Gaussians (DOG) term, the output of the OPL for a light input. The DOG is then the input of the IPL, the stage where the contrast gain control process takes place. Our main interest is the ganglion layer, which receives an input current I_gang(x, y, t). This current drives the ganglion cells, which fire spikes when their threshold is reached. The ganglion cell axons, taken together, form the optic nerve that goes to the cortex. It is the output of the ganglion cells that neuroscientists need in order to run simulations and do research on the cortex.
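To make the center–surround idea behind the DOG term concrete, here is a minimal sketch of a generic difference-of-Gaussians spatial kernel. The center/surround widths σ_c, σ_s and the surround weight w_s are hypothetical values chosen for illustration only; this is not the exact VirtualRetina formulation, which also involves the temporal convolutions mentioned above.

    #include <cmath>
    #include <cstdio>

    // Generic center-surround difference-of-Gaussians kernel (illustrative only):
    // dog(x, y) = G(r; sigma_c) - w_s * G(r; sigma_s), with normalized 2-D Gaussians.
    static const double PI = 3.14159265358979323846;

    double gaussian2d(double r2, double sigma) {
        return std::exp(-r2 / (2.0 * sigma * sigma)) / (2.0 * PI * sigma * sigma);
    }

    double dog(double x, double y, double sigma_c, double sigma_s, double w_s) {
        const double r2 = x * x + y * y;
        return gaussian2d(r2, sigma_c) - w_s * gaussian2d(r2, sigma_s);
    }

    int main() {
        // Print a 1-D profile of the kernel: positive (excitatory) center,
        // negative (inhibitory) surround.
        const double sigma_c = 0.5, sigma_s = 1.5, w_s = 0.8;   // hypothetical values
        for (double x = -4.0; x <= 4.0; x += 0.5)
            std::printf("x = %+4.1f   DOG = %+.4f\n", x, dog(x, 0.0, sigma_c, sigma_s, w_s));
        return 0;
    }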
Figure 1.7: The different stages of the retina model.
1.4 The VirtualRetina software

This software (Figure 1.8 shows its logo) was developed by Adrien Wohrer, a former PhD student at INRIA, and it exists in two releases (February 2009 and May 2010).

Figure 1.8: The VirtualRetina software logo.

The main aim of the software is to produce spike trains given an image sequence and a retina configuration. An image sequence is a set of successive images appearing at a fixed rate (24 frames/sec). The retina configuration is the set of model parameters; it is specified through an XML file. To summarize, the input files are:

• An XML file (the retina configuration).
• An image sequence (Figure 1.9 shows one of the images of the input sequence).

Figure 1.9: An example of a gray-scale input image.

The software shows how the image appears at the different layers and, at the end, it produces 4 output files summarizing the simulation, the spike train and the neuron positions. The spike train file, with a .spk extension, contains two columns: the first column contains the unit (identifier) of the ganglion cell and the second the corresponding firing time. As the distribution of the ganglion cells is already configured in the input retina.xml file, the output retina.xml file contains the data of the input file plus metadata concerning the unit configuration: the identifier (unit) of each cell and its spatial position in degrees (Figure 1.11).
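As an illustration of how such an output can be post-processed, the sketch below reads a two-column "unit / firing time" file into per-neuron spike-time lists. It assumes a plain-text, whitespace-separated rendering of the format described above (the file name and time unit are hypothetical); the actual .spk layout should be checked against the VirtualRetina documentation.

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <vector>

    int main() {
        // Assumed format: one spike per line, "unit firing_time", whitespace separated.
        std::ifstream in("output.spk");               // hypothetical file name
        if (!in) { std::cerr << "cannot open file\n"; return 1; }

        std::map<int, std::vector<double>> spikes;    // unit -> list of firing times
        int unit;
        double t;
        while (in >> unit >> t)
            spikes[unit].push_back(t);

        for (const auto& kv : spikes)
            std::cout << "unit " << kv.first << ": " << kv.second.size() << " spikes\n";
        return 0;
    }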
Figure 1.10: The structure of the .spk file.
Figure 1.11: The log-polar distribution of the ganglion cells in VirtualRetina. The distribution is mainly concentrated at the center of the retina (the foveal region), and its density follows an exponential decay with the distance from the fovea. The background color shows the pixels of the image and the cell(s) associated with each pixel.

VirtualRetina also allows simulation with another scheme, the rectangular one, where the distribution of the ganglion cells is uniform over the whole retina. We can, for instance, simulate a rectangular scheme with a density of 1 cell/pixel, where each pixel corresponds to one ganglion cell.
The degree is the spatial unit of measure in VirtualRetina; it expresses how far a cell is from the center, in degrees. The input retina.xml file looks like Figure 1.12.

Figure 1.12: The XML file is the "written virtual retina": it carries all the properties of the different layers of the model.

1.5 Simulation via VirtualRetina

In this section we show some of the results that VirtualRetina produces. Figure 1.13 shows the signal at the different layers of the retina model. This is the first output of the software, showing in real time the variation of the signal as the input images vary over the sequence.

We mainly used retina configurations with X, Y, ON and OFF cells¹ for the ganglion cell layer. The input sequence is the default one (Figure 1.9). To change the configuration of the ganglion layer it is sufficient to change 4 parameters in the retina.xml file.

¹ The X cells are specific retinal ganglion cells (neurons) involved in visual information processing (Troy and Shou, 2002; Hughes, 1979). They differ from the related Y cells (also called alpha cells) and W cells (also called gamma cells) by their morphology, their response properties, and their projection onto the cell layers of the lateral geniculate nucleus of the thalamus, which transmits information to the visual cortex (Lennie, 1980; Bowling and Michael, 1984; Sur et al., 1987; Tamamaki et al., 1995; Stanford et al., 1983; Boycott and Wässle, 1991; Wässle and Boycott, 1991), and represent a class of horizontal cells. The ON cells have the property of being activated in response to a positive stimulus; conversely, the OFF cells are activated in response to an inhibition.
Figure 1.13: The signal at each layer of the retina model. Panels: (a) receptor layer, (b) horizontal layer, (c) OPL current, (d) bipolar current, (e) fast adaptation layer, (f) ganglion input current.

We used 24 frames/sec to mimic the natural ability of the retina to capture images. Figure 1.14 shows several spike trains for different cells. The difference between ON and OFF cells is that the ON cells are sensitive to illumination while, on the contrary, the OFF cells are sensitive to darkness.
Figure 1.14: Spike trains for the different cell types. Panels: (a) X-ON cells, (b) Y-ON cells, (c) X-OFF cells, (d) Y-OFF cells. The retina was configured with 16 cells in the fovea and a log-polar distribution around it. The input is the default sequence (Figure 1.9).

Figure 1.15 shows the spike trains for different colonies of cells; much larger simulations, with thousands of cells, are also possible.

1.6 Conclusion

This chapter gave a quick overview of the retina from the biological and the simulation points of view. As intended, VirtualRetina can be used to produce spike trains for more than a thousand ganglion cells. It also gives access to other layers of the simulator, such as the input current to the ganglion cells. The time the simulator takes on an ordinary personal machine (Ubuntu 9.10, 2 GB RAM, 1.66 GHz Core 2 Duo processor) is reasonable: a few seconds to simulate a retina with a hundred foveal cells and 50 images as the input sequence. The simulation time rises to minutes for longer sequences.
Figure 1.15: Spike trains for a large-scale simulation with thousands of cells. Panels: (a) XY cat cells, (b) Magno cells. The two panels show the responses of ganglion cells of two types (up: XY cat cells, two layers of cells; down: Magno ON and OFF cells). One can see the time the cells take at the beginning to reach the asymptotic behavior. This response corresponds to a moving-bar stimulus.
Chapter 2
Performing statistics with real and VirtualRetina data

2.1 Introduction

In this chapter we explain the basics and the tools with which we perform statistics on spike trains. We begin with the theoretical basis. We then introduce the EnaS library, the main tool we use for our statistics. Finally, results from synthetic, real and VirtualRetina data are presented.

2.2 What is a spike train?

As presented previously, a spike train is a structure that contains boolean data (0 and 1) about a neuron's activity. To create a spike train we need to know at which times the neuron fires. A raster is the spike train of several neurons, i.e., of a neural network (Figure 2.1). We refer to a neuron as a "unit". By definition, a spike train can be written as

    S(t) = \sum_{i=1}^{N} \delta(t - t_i)        (2.1)

where t_i is the list of times at which neuron i fires, N is the total number of units (neurons) and δ is the Dirac function. Figure 2.1 shows a typical spike train for a network of 600 ganglion cells during 3 seconds (from a VirtualRetina simulation). The length of the raster is its number of time samples, and N is the number of neurons in the network.
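The spike times produced by the simulator can be turned into the boolean raster used throughout this chapter by binning time with the precision δ (1 ms here). The following sketch does this for a few hard-coded spike-time lists; the values are purely illustrative.

    #include <cstdio>
    #include <vector>

    int main() {
        // Illustrative spike times (in ms) for N = 2 neurons.
        std::vector<std::vector<double>> spike_times = {
            {1.2, 4.7, 9.1},
            {0.3, 4.9}
        };
        const double bin_ms = 1.0;   // precision delta = 1 ms
        const int T = 10;            // raster length in bins

        // omega[i][t] = 1 if neuron i fires in bin t, 0 otherwise.
        std::vector<std::vector<int>> omega(spike_times.size(), std::vector<int>(T, 0));
        for (size_t i = 0; i < spike_times.size(); ++i)
            for (double s : spike_times[i]) {
                const int t = static_cast<int>(s / bin_ms);
                if (t >= 0 && t < T) omega[i][t] = 1;
            }

        for (size_t i = 0; i < omega.size(); ++i) {
            std::printf("neuron %zu: ", i);
            for (int t = 0; t < T; ++t) std::printf("%d", omega[i][t]);
            std::printf("\n");
        }
        return 0;
    }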
Figure 2.1: A typical spike train (produced with VirtualRetina).
Figure 2.2: The raster of 5 neurons, with an explanation of its length, its size and a spike block of range R.
2.3 Performing statistics

The data of a spike train are boolean, so when performing statistics we deal with 'ones' and 'zeros' that represent the activity and non-activity of the neurons. Let i = 0, 1, ..., N − 1 be the neuron index. The activity of neuron i is represented by ω_i(t), where

    ω_i(t) = 1 if neuron i fires at time t, and 0 otherwise.        (2.2)

We also define three further notions:

1. A spike pattern represents the activity of the N neurons at a specific time t:

       ω(t) = [ω_i(t)]_{i=0}^{N-1}        (2.3)

   For numerical purposes, a spike pattern is encoded as the integer

       ω(t) = \sum_{i=0}^{N-1} 2^i ω_i(t)        (2.4)

2. A spike block represents the activity of the N neurons between times t_1 and t_2, i.e., several consecutive spike patterns:

       ω_{t_1}^{t_2} = { ω(t) }_{t_1 ≤ t ≤ t_2}        (2.5)

3. A raster plot is the activity of the N neurons over the whole time range:

       ω = [ω(t)]_{t=-∞}^{0}        (2.6)

We consider that a neuron fires in a time interval with precision δ, i.e., between t and t + δ; typically δ = 1 ms, the smallest time unit of the raster time scale. A simple characteristic of a spike train is the firing rate

    r_i(t) = n_i(t) / t   [spikes/sec]        (2.7)

where n_i(t) is the number of times neuron i fired.

Observables

We call an observable a function that associates a real number to a raster plot. For example, observing whether neuron i fires at time t = 0 is an observable. We can also observe whether neurons i and j both fired at time t = 0, and attribute the following function to this event:

    φ(ω) = ω_i(0) ω_j(0)        (2.8)

Likewise, if neuron j fired a time interval τ after neuron i:

    φ(ω) = ω_i(0) ω_j(τ)        (2.9)
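The pattern encoding of Eq. (2.4) and the pairwise observables of Eqs. (2.8)-(2.9) can be illustrated numerically on a small hand-written raster (a minimal sketch; the raster values below are purely illustrative):

    #include <cstdio>
    #include <vector>

    // Encode a spike pattern omega(t) as an integer: sum_i 2^i * omega_i(t)   (Eq. 2.4).
    int encode_pattern(const std::vector<std::vector<int>>& omega, int t) {
        int code = 0;
        for (size_t i = 0; i < omega.size(); ++i)
            code |= omega[i][t] << i;
        return code;
    }

    int main() {
        // Raster for N = 3 neurons and T = 5 time bins (illustrative values).
        std::vector<std::vector<int>> omega = {
            {1, 0, 1, 0, 1},   // neuron 0
            {0, 1, 1, 0, 0},   // neuron 1
            {1, 1, 0, 0, 1}    // neuron 2
        };

        for (int t = 0; t < 5; ++t)
            std::printf("t = %d   pattern code = %d\n", t, encode_pattern(omega, t));

        // Observable phi(omega) = omega_i(t) * omega_j(t + tau)   (Eqs. 2.8-2.9),
        // evaluated here for i = 0, j = 2, tau = 1 at t = 0.
        const int i = 0, j = 2, tau = 1, t0 = 0;
        std::printf("phi = %d\n", omega[i][t0] * omega[j][t0 + tau]);
        return 0;
    }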
2.3.1 The statistical models

We want to characterize spike train statistics. Our main assumptions here are that the spike train is stationary and non-deterministic, and that we study its asymptotic behavior. The simplest statistical model is the Bernoulli distribution. The probability that a block ω_s^t corresponds to a known word a_s^t (we call "word" the structure that represents the neuron activity through a spike block) is

    μ(ω_s^t = a_s^t) = μ(ω_i(k) = a_i(k), i = 1...N, k = s...t)
                     = \prod_{i=1}^{N} \prod_{k=s}^{t} μ(ω_i(k) = a_i(k))
                     = \prod_{i=1}^{N} r_i^{n_i(t,s)} (1 - r_i)^{(t - s - n_i(t,s))}        (2.10)

where μ denotes the probability that the events in parentheses happen, and n_i and r_i are respectively the number of spikes and the firing rate of neuron i in the time interval [s, t]. In the Bernoulli distribution, this probability is thus determined by the firing rate of each neuron i. The correlation between two neurons is then

    C_{i,j} = | μ(ω_i(t) ω_j(t)) - μ(ω_i(t)) μ(ω_j(t)) |        (2.11)

Thus C_{i,j} measures how often the two neurons i and j fired together, minus the product of their firing rates. More generally, given a spike train, we look for the statistical model of the data representing this spike train. For example, in a Bernoulli model, we search for the probability μ that corresponds to the firing rates r_i (μ(ω_i) = r_i).

The entropy measures how disorganized the system is. For spike blocks ω having probability μ(ω) we define the entropy as

    h(μ) = - \sum_{ω} μ(ω) log μ(ω)        (2.12)

where the sum runs over all possible spike blocks, encoded as in Eq. (2.4). We take some constraints into account when maximizing the entropy (an empirical illustration of these quantities is sketched below):

1. The first constraint is to match the experimentally measured averages: μ(φ_l) = φ_l^exp, the experimental average of a function φ_l.
2. The second constraint is the classical normalization of probability: \sum_{ω} μ(ω) = 1.
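As announced above, here is a minimal empirical illustration of these quantities: it estimates the probabilities μ(ω) of the instantaneous spike patterns of a small raster, the corresponding entropy of Eq. (2.12), and the firing-rate averages used as constraints. This is a toy sketch on hard-coded data (range-1 patterns only), not the estimator implemented in EnaS.

    #include <cmath>
    #include <cstdio>
    #include <map>
    #include <vector>

    int main() {
        // Raster for N = 2 neurons, T = 8 time bins (illustrative values).
        std::vector<std::vector<int>> omega = {
            {1, 0, 1, 1, 0, 1, 0, 1},
            {1, 0, 0, 1, 0, 1, 1, 0}
        };
        const int N = static_cast<int>(omega.size());
        const int T = static_cast<int>(omega[0].size());

        // Empirical probability of each instantaneous pattern (range-1 block),
        // encoded as an integer as in Eq. (2.4).
        std::map<int, double> mu;
        for (int t = 0; t < T; ++t) {
            int code = 0;
            for (int i = 0; i < N; ++i) code |= omega[i][t] << i;
            mu[code] += 1.0 / T;
        }

        // Entropy h(mu) = -sum_omega mu(omega) log mu(omega)   (Eq. 2.12).
        double h = 0.0;
        for (const auto& kv : mu) h -= kv.second * std::log(kv.second);

        // Empirical averages used as constraints: the firing rates mu(omega_i).
        for (int i = 0; i < N; ++i) {
            double rate = 0.0;
            for (int t = 0; t < T; ++t) rate += omega[i][t];
            std::printf("mu(omega_%d) = %.3f\n", i, rate / T);
        }
        std::printf("entropy h(mu) = %.3f nats\n", h);
        return 0;
    }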
Gibbs Potential

A Gibbs distribution is a probability distribution that maximizes the statistical entropy under the constraint that the experimental averages of prescribed functions φ_l are equal to their averages with respect to μ. This distribution is such that the probability of a block ω, μ(ω), behaves like e^{ψ(ω)}, where ψ(ω) is the Gibbs potential, defined by

    ψ = \sum_{l=1}^{L} λ_l φ_l        (2.13)

where the λ_l are a set of Lagrange multipliers; this equation defines the statistical model. Additionally, such a potential obeys

    P[ψ] = h[μ] + μ[ψ]        (2.14)

where μ is the probability distribution we are looking for (μ(φ_l) = C_l) and P is the topological pressure. The process of entropy maximization then amounts to finding the parameters λ_l. For example, to see the correlation between two neurons, we suppose that the statistical model is given by the following Gibbs potential:

    ψ(ω) = λ_1 ω_0(0) + λ_2 ω_1(0) + λ_3 ω_0(0) ω_1(0)        (2.15)

In this equation we have three monomials, each multiplied by a coefficient λ_i. The EnaS library allows these parameters to be estimated from a given spike train by means of entropy maximization. To find out the correlation, we read and interpret the coefficient λ_3: if this coefficient is important in the model, then the number of times neurons 0 and 1 fired together is high, which means that they are strongly correlated.

2.4 The EnaS Library

EnaS (Figure 2.3), the Event neural assembly Simulation, is a dedicated C++ library. It has been developed as a collaboration between the NeuroMathComp and Cortex teams at INRIA Sophia-Antipolis. The library is dedicated to neural simulation and to performing statistics on real or synthetic data.

Figure 2.3: The EnaS logo.
The source code is available on the EnaS website (http://enas.gforge.inria.fr/) together with some tutorial documentation and examples of the use of its classes. The source does not need to be installed or configured: it is sufficient to put it in the same working directory and to add #include "EnaS.h"; and using namespace enas;. Compiling a program that includes "EnaS.h" is slightly different, because the library uses external libraries such as gsl, gpl and glpk, which have to be installed and configured beforehand; the compilation command is therefore:

    g++ -Wall -lgsl -lgslcblas -lglpk -lm MyFile.cpp -o MyOutputFile.o

The main classes we used from this library are the following:

• FileTimeSequence: loads a 'time-unit' or .spk file containing the boolean data of a spike train.
• PrefixTree: defines a tree from a Gibbs potential, which allows the estimation of some statistical parameters.
• GibbsPotential: allows the modeling of a set of data and the extraction of the model parameters, such as the λ coefficients.

This library is helpful to our project because it contains classes that help us perform statistics. For example, EnaS can estimate the parameters of the statistical model of a spike train (the λs in Eq. (2.15)).

2.5 Performing statistics with the EnaS library and results

The main idea behind using EnaS with spike train data is to estimate the parameters λ_l between neurons. As a special case, we are interested in correlation. This correlation depends on the statistical model that the spike train follows: in a Bernoulli model, the correlation tends to zero along a raster. Data following a Bernoulli distribution can be generated for different probability values. Note that, in a Bernoulli distribution, the probabilities of the two possible independent events are complementary: if p is the probability that the event happens, the probability that it does not happen is 1 − p. The point of measuring the correlation between two neurons is not merely to estimate it and check whether its value is very low; the point is to see how this correlation evolves with the raster length. Indeed, we can show theoretically and experimentally that this correlation decreases with the raster length. In log scale, the evolution follows

    C_{i,j}(T) = k / \sqrt{T}        (2.16)

where k is a real number and T is the raster length.
2.5.1 Synthetic data

Figure 2.4 shows the decay of the correlation as a function of the raster length. The data were generated randomly from a Bernoulli distribution with p = 0.5. The same behavior is observed for different Bernoulli probabilities (from p = 0.1 to p = 0.9), Figure 2.5. A fitting technique (Levenberg-Marquardt) allowed us to fit the results with k/T^a. Looking at the results over the whole time scale, we observe that they closely follow the line of equation 0.6/√T (where k = 0.6 and a = 0.5).

Figure 2.4: The correlation between two neurons whose raster plots follow a Bernoulli distribution. The data are synthetic with a probability p = 0.4, and the plot is in logarithmic scale. The correlation computed with the empirical algorithm decreases closely along the k/√T line.
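A self-contained way to reproduce this behavior is to draw two independent Bernoulli spike trains and evaluate the empirical correlation of Eq. (2.11) on prefixes of increasing length T. The sketch below uses illustrative parameters and is not the EnaS estimator; the printed values should shrink roughly like k/√T, so that C·√T stays roughly constant.

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 gen(42);                        // fixed seed for reproducibility
        std::bernoulli_distribution fire(0.5);       // firing probability p = 0.5

        const int T_max = 1 << 16;
        std::vector<int> w0(T_max), w1(T_max);       // two independent spike trains
        for (int t = 0; t < T_max; ++t) { w0[t] = fire(gen); w1[t] = fire(gen); }

        // Empirical correlation C = |mu(w0 w1) - mu(w0) mu(w1)|   (Eq. 2.11),
        // computed on prefixes of increasing length T.
        for (int T = 1 << 8; T <= T_max; T <<= 2) {
            double m0 = 0.0, m1 = 0.0, m01 = 0.0;
            for (int t = 0; t < T; ++t) { m0 += w0[t]; m1 += w1[t]; m01 += w0[t] * w1[t]; }
            m0 /= T; m1 /= T; m01 /= T;
            const double C = std::fabs(m01 - m0 * m1);
            std::printf("T = %6d   C = %.5f   C*sqrt(T) = %.3f\n",
                        T, C, C * std::sqrt(static_cast<double>(T)));
        }
        return 0;
    }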
Figure 2.5: Evolution of the correlation for different probability distributions (Bernoulli distribution).
Figure 2.6: Evolution of the correlation for different coincidence times. The delay is the term we use to express that the two neurons fire within a time interval τ. We studied the correlation for different delay values, going from 1 to 6.
2.5.2 Real data

One of our collaborators (Kolomiets Bogdan, INSERM) supplied us with real data acquisitions of ganglion cell activity. The signals were acquired with an "MEA chip" containing 58 electrodes (Figure 2.7).

Figure 2.7: The MEA chip for ganglion cell recordings.

Data characteristics:

• The total duration of the acquisition is 93 s.
• 30 s of spontaneous activity followed by 63 s of evoked-potential activity (Figure 2.8). The 63 s of evoked potential were organized as 1 s of light stimulus followed by a 5 s inter-stimulus interval with no stimulus.

Figure 2.8: The evoked-potential recording. A light stimulus is presented in front of the eye and, at the back, at the ganglion cell layer, the electric potential is recorded with the MEA acquisition grid.

Statistics on the correlation between two randomly chosen neurons in the real acquired data show that the correlation does not tend to 0 when the raster length tends to ∞.
This fact supports the hypothesis that the neurons in the vertebrate retina are correlated, hence our motivation to add the correlation property to the ganglion cells in VirtualRetina. We applied the Gibbs potential model (Equation 2.15) to these data in order to estimate the correlation between several couples of neurons as a function of the raster length.

Figure 2.9: Correlation between ganglion cells in a real acquisition. The figure shows the correlation for three couples of neurons, at several distances. The time scale corresponds to the first 30 s of the acquisition (Section 2.5.2); neurons at rest.
Figure 2.10: Correlation between ganglion cells in a real acquisition. The figure shows the correlation for three couples of neurons, at several distances. The time scale corresponds to the last 60 s of the acquisition (Section 2.5.2); evoked potential.

2.5.3 VirtualRetina data

VirtualRetina data can be generated either with the installed software or through the web-service implementation. It is preferable to use the software after installation, because it is more controllable and all the parameters of the model are accessible. The figures below (2.11, 2.12, 2.13, 2.14) show the correlation between different couples of neurons at several distances. For the 4 cell types, the correlation tends to zero for large raster lengths. The figures show how the correlation value evolves as a function of the raster length. There are some perturbations at the extremities of the curves; the perturbation at the beginning comes from the non-asymptotic behavior of the spike train.
Figure 2.11: Correlation between X-ON ganglion cells in VirtualRetina.
Figure 2.12: Correlation between Y-ON ganglion cells in VirtualRetina.
Figure 2.13: Correlation between X-OFF ganglion cells in VirtualRetina.
Figure 2.14: Correlation between Y-OFF ganglion cells in VirtualRetina.
2.6 Conclusion

In this chapter we presented the statistical tools on which we rely to perform statistics on spike trains, and explained the mathematical, numerical and algorithmic frameworks behind them. With these tools we produced statistics for real acquisitions, simulated data and synthetic data. We showed the difference between the Bernoulli model data, the VirtualRetina data and the real acquisitions by studying the correlation as a function of the raster length in each case. In the synthetic data, the correlation follows the k/√T line; in the VirtualRetina simulations, the correlation also follows this line. On the contrary, the correlation between different couples of neurons in the real acquisitions does not verify this property, which means that the ganglion cells in VirtualRetina are not correlated while the real ones are. Connections do exist between cells in the VirtualRetina model, but at the lower layers, not at the ganglion cell layer. The results also show that connections between retinal cells at the lower layers do not automatically imply that the ganglion cells are correlated: the ganglion cells also need connections between themselves.

In the second part of the project, we would like to add connections between the ganglion cells in VirtualRetina in order to enhance its biological plausibility, i.e., we will translate the values of the statistical model parameters (for real ganglion cells) into connections and add them to the ganglion cells of VirtualRetina.
Chapter 3
Inferring connectivity between Retinal Ganglion Cells

3.1 Introduction

In the first section of this chapter we show results that give an idea of the connectivity between retinal ganglion cells, obtained from a real data acquisition. The second section will be a bibliographical study. This chapter is in fact complementary to the second one: both explain how to assess the connectivity between retinal ganglion cells, but in two different ways. Previously we measured the evolution of the correlation as a function of the raster length; now we measure the connectivity quantitatively using the parameters of Gibbs models.

3.2 The connectivity between ganglion cells from real data acquisition

Thanks to data acquired by one of our collaborators (Kolomiets Bogdan, Institut de la Vision, Paris), we could use the EnaS library to compute some statistics about connectivity. Technically, we can get an idea of the connectivity between two neurons from the λ_l in the Gibbs potential equation

    ψ = \sum_{l=1}^{L} λ_l φ_l        (3.1)

already described in the second chapter (Section 2.3.1). In fact, the larger λ_l, the more frequent the occurrence of the corresponding observable in the spike train. Consequently, if we measure the different λ_l for several couples of neurons, we get an idea of their connectivity. Recalling the MEA acquisition electrode (Figure 3.1), in the following we will:
1. Take a collection of six neurons.
2. Apply the Gibbs model.
3. Use EnaS to estimate the parameters λ_l (a simplified sketch of this estimation for a single pair is given below).

Figure 3.1: The MEA chip for ganglion cell recordings.
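EnaS obtains the λ_l through the full entropy-maximization machinery. Purely to illustrate what the pair coefficient represents, the sketch below treats the special case of the instantaneous pairwise potential of Eq. (2.15): for that model the maximum-entropy coefficient λ_3 reduces to the log odds ratio of the four joint pattern frequencies, which can be estimated directly from a raster. This is a simplified sketch on a hard-coded toy raster, ignoring temporal structure; it is not the EnaS procedure used for the results reported here.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        // Toy raster for the pair (neuron 0, neuron 1); values are illustrative only.
        std::vector<int> w0 = {1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1};
        std::vector<int> w1 = {1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1};
        const int T = static_cast<int>(w0.size());

        // Joint pattern counts n[w0][w1], with a +1 pseudo-count to avoid log(0).
        double n[2][2] = {{1.0, 1.0}, {1.0, 1.0}};
        for (int t = 0; t < T; ++t) n[w0[t]][w1[t]] += 1.0;

        // For psi = lambda1*w0 + lambda2*w1 + lambda3*w0*w1 (Eq. 2.15, no memory),
        // the Gibbs distribution gives
        //   lambda3 = log( P(1,1) P(0,0) / (P(1,0) P(0,1)) ).
        const double lambda1 = std::log(n[1][0] / n[0][0]);
        const double lambda2 = std::log(n[0][1] / n[0][0]);
        const double lambda3 = std::log((n[1][1] * n[0][0]) / (n[1][0] * n[0][1]));

        std::printf("lambda1 = %.3f  lambda2 = %.3f  lambda3 = %.3f\n",
                    lambda1, lambda2, lambda3);
        // lambda3 > 0 means the two cells fire together more often than independent
        // cells with the same firing rates would.
        return 0;
    }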
Figure 3.2: The six graphs show the value of λ_{0j} as a function of distance for different network arrangements: vertical (e.g., the cells 12, 13, 14, 15, 16, 17), horizontal, random. The idea is to see how the distance affects the connectivity factor. The x-axis represents the distance between the first cell (called cell 0) and the j-th cell (called cell j). The y-axis represents the connectivity factor λ_{0j} between cell 0 and cell j (j = 1, 2, ..., 5). In all the graphs we observe that part or all of the curve increases at first and then decreases after some distance (commonly here between 400 and 1000 µm).
Bibliography

[1] A. Wohrer, Model and large-scale simulator of a biological retina, with contrast gain control. PhD thesis, University of Nice Sophia-Antipolis, INRIA, 2009.
[2] J.-C. Vasquez, T. Vieville, B. Cessac, Entropy-based parametric estimation of spike train statistics. INRIA, 2009.
[3] S. Cocco, S. Leibler, R. Monasson, Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. PNAS, 2009.
[4] C. Shalizi, K. Shalizi, Blind Construction of Optimal Nonlinear Recursive Predictors for Discrete Sequences. CoRR, 2004.
[5] R. Haslinger, K. Klinkner, C. Shalizi, The Computational Structure of Spike Trains. Neural Computation, vol. 22 (2010), pp. 121-157.