Definition. {N(t), t ≥ 0} is a simple Poisson process with intensity, or mean rate, λ if: (a) N(0) = 0; (b) given any 0 ≤ t_1 < t_2 < ··· < t_n, the random variables N(t_k) − N(t_{k−1}), k = 1, 2, ..., n, are mutually independent; and (c) for any 0 ≤ t_1 < t_2, N(t_2) − N(t_1) is a Poisson random variable with probability distribution

$$\Pr\{N(t_2) - N(t_1) = k\} = \frac{\left[\lambda(t_2 - t_1)\right]^k \exp\left(-\lambda(t_2 - t_1)\right)}{k!}, \qquad k = 0, 1, 2, \ldots. \tag{9.26}$$

Property (a) is just a starting condition. Property (b) puts the Poisson process in the class of processes with independent increments. Property (c) tells us that the increments are stationary (since only time differences matter) with Poisson distributions. The meaning of λ will become clear shortly. From (9.26) with t_1 = 0 and t_2 = t, we see that N(t) is a Poisson random variable with mean and variance equal to λt. Also, with t_1 = t and t_2 = t + Δt, we find

$$\Pr\{N(t + \Delta t) - N(t) = k\} = \frac{(\lambda\,\Delta t)^k \exp(-\lambda\,\Delta t)}{k!} = \begin{cases} 1 - \lambda\,\Delta t + o(\Delta t), & k = 0, \\ \lambda\,\Delta t + o(\Delta t), & k = 1, \\ o(\Delta t), & k \ge 2, \end{cases} \tag{9.27}$$

where o(Δt) denotes terms that, as Δt → 0, approach zero faster than Δt itself. Hence, in very small time intervals, the process is most likely to stay unchanged (k = 0) or undergo a step increase of unity (k = 1). The value of N(t) is the number of unit step changes that have occurred in (0, t]. A typical realization (sample path, trajectory) of the process appears as sketched in Figure 9.7A. In Figure 9.7B a cross is placed on the t-axis at each time at which N(t) jumps by unity.

[Figure 9.7. A - Realization of a Poisson process. B - Associated realization of the Poisson point process. C - Hypothetical spike train.]

Each cross may be associated with the occurrence of a certain kind of event, such as a postsynaptic potential or an action potential. The crosses (points) form a realization of a Poisson point process, and N(t) thus records or counts the number of events in (0, t]. Figure 9.7C shows a hypothetical spike train, which can be associated with the point process.
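The small-interval behavior in (9.27) is easy to check numerically. The following sketch (plain Python; the values λ = 5 and Δt = 10⁻⁴ are illustrative choices, not taken from the text) evaluates the Poisson probabilities of (9.26) over an interval of width Δt and confirms that k = 0 and k = 1 together account for all but O((λΔt)²) of the probability mass.

```python
import math

def poisson_pmf(k, mu):
    """Poisson probability (9.26) with mean mu = lambda * (t2 - t1)."""
    return mu ** k * math.exp(-mu) / math.factorial(k)

lam, dt = 5.0, 1e-4            # illustrative rate and interval width
mu = lam * dt
p0 = poisson_pmf(0, mu)        # per (9.27): 1 - lam*dt + o(dt)
p1 = poisson_pmf(1, mu)        # per (9.27): lam*dt + o(dt)
p_two_or_more = 1.0 - p0 - p1  # per (9.27): o(dt), here ~ (lam*dt)**2 / 2
```

The leftover mass `p_two_or_more` is of order (λΔt)²/2, which vanishes faster than Δt, as the o(Δt) notation asserts.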
The waiting time to an event. Consider any s ≥ 0 and let T_1 be the time to the first event occurring after s. Then T_1 is exponentially distributed with mean 1/λ.

Proof. The probability that one has to wait longer than t for the first event is the probability that there are no events in (s, s + t]. Thus

$$\Pr\{T_1 > t\} = \Pr\{N(s + t) - N(s) = 0\} = e^{-\lambda t}.$$

Thus the distribution function of T_1 is

$$\Pr\{T_1 \le t\} = 1 - e^{-\lambda t}, \qquad t \ge 0, \tag{9.28}$$

and hence the probability density function p_1 of T_1 is

$$p_1(t) = \lambda e^{-\lambda t}, \qquad t > 0, \tag{9.29}$$

as required. Since s was completely arbitrary, it could have coincided with the time of an event. It may be shown, in fact, that the time interval between events is exponentially distributed with mean 1/λ. Since the average waiting time between events is 1/λ, there are, roughly speaking, on average λ events per unit time. Thus λ is called the mean rate or intensity.

[Figure 9.8. Histogram of time intervals between m.e.p.p.'s at the frog neuromuscular junction. From Fatt and Katz (1952). Reproduced with the permission of The Physiological Society and the authors.]

Figure 9.8 shows the histogram, obtained by Fatt and Katz (1952), of time intervals between the spontaneous miniature endplate potentials at the frog neuromuscular junction. The histogram has the shape of an exponential distribution. Fatt and Katz were thus led to make their Poisson hypothesis: that the arrival times of the m.e.p.p.'s constitute a Poisson point process [see Van der Kloot, Kita, and Cohen (1975)].

The Poisson process as a primitive model for nerve-cell activity. Let the depolarization of a nerve cell be {V(t), t ≥ 0}. Suppose that excitatory inputs occur at random in accordance with events in a simple Poisson process {N(t), t ≥ 0} with mean rate λ. Each excitatory input causes V to increase by a_E. When V reaches or exceeds the constant threshold level θ > 0, the cell emits an action potential. Then, while V < θ,

$$V(t) = a_E N(t), \qquad V(0) = 0.$$
(9.30)

In this primitive nerve-cell model, what is the probability distribution of the time interval between action potentials? To answer this we first ask: what is the waiting time T_k until the kth event in a simple Poisson process after the arbitrary time point s? We will show that T_k has a gamma density with parameters k and λ.

Proof. The kth event will occur in (s + t, s + t + Δt] if and only if there are k − 1 events in (s, s + t] and one event in (s + t, s + t + Δt]. It follows that

$$\Pr\{T_k \in (t, t + \Delta t]\} = \frac{e^{-\lambda t}(\lambda t)^{k-1}}{(k-1)!}\,\lambda\,\Delta t + o(\Delta t), \qquad k = 1, 2, \ldots. \tag{9.31}$$

Hence the density of T_k is

$$f_k(t) = \frac{\lambda(\lambda t)^{k-1} e^{-\lambda t}}{(k-1)!}, \qquad t > 0, \tag{9.32}$$

as required. Hence the waiting time for the kth event has a gamma density with parameters k and λ. Thus T_k has mean k/λ and variance k/λ². Some gamma densities are illustrated in Figure 9.9.

[Figure 9.9. Gamma densities for λ = 1 and k = 1, 2, and 4. Adapted from Cox and Miller (1965).]

To return to the primitive nerve-cell model: an action potential is emitted when V reaches or exceeds θ or, equivalently, when N reaches or exceeds θ/a_E. Letting [x] denote the largest integer less than x, we find that 1 + [θ/a_E] excitatory inputs are required. Hence the time interval between action potentials has a gamma density with parameters 1 + [θ/a_E] and λ. The mean time interval between action potentials is (1 + [θ/a_E])/λ.

The gamma densities, in fact, resemble the ISI histograms obtained for many nerve cells and have often been fitted to them [see, for example, Stein (1965)]. However, the model employed to derive the gamma densities incorporates no decay of membrane potential between excitatory inputs. Only when the cell has an exceedingly large time constant and the rate of incoming excitation is very fast will this approximation be valid. Finally, we note that when θ/a_E is large the gamma density becomes approximately that of a normal random variable.
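The gamma-density result can be checked by simulating the primitive model directly. In the sketch below the parameter values (λ = 10, a_E = 0.5, θ = 2) are illustrative assumptions, not values from the text. Inter-event times are drawn as exponentials with mean 1/λ (the k = 1 case of (9.32)), and one interspike interval is the waiting time for 1 + [θ/a_E] of them, so the sample mean and variance should approach k/λ and k/λ² with k = 1 + [θ/a_E].

```python
import math
import random

rng = random.Random(7)
lam, a_e, theta = 10.0, 0.5, 2.0   # illustrative parameters
# With [x] the largest integer less than x, 1 + [theta/a_e] equals
# ceil(theta/a_e) for all theta/a_e > 0, integer or not.
k = math.ceil(theta / a_e)

def one_isi():
    """One ISI: sum of k exponential inter-event times, a gamma(k, lam) sample."""
    return sum(rng.expovariate(lam) for _ in range(k))

isis = [one_isi() for _ in range(20000)]
mean_isi = sum(isis) / len(isis)                              # theory: k/lam
var_isi = sum((x - mean_isi) ** 2 for x in isis) / len(isis)  # theory: k/lam**2
```

With these values k = 4, so the theoretical mean ISI is 0.4 and the variance 0.04; the Monte Carlo estimates should land close to both.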
9.5 Poisson excitation with Poisson inhibition

We will here consider another primitive model for nerve-cell activity in which excitation and inhibition arrive according to independent Poisson processes. This gives what is commonly called a birth-and-death process, which in this case Feller calls a randomized random walk. Much of what follows is taken from Feller's treatment (1966). Again we ignore the decay of membrane potential between inputs. Despite the unphysiological nature of the model, it is useful because: (i) the model can be analyzed completely, thereby providing a standard with which to compare other models and also real nerve cells; and (ii) in a limiting case we obtain a Wiener process (see the next section), which is useful in many situations.

Let V(t), t ≥ 0, be the depolarization at time t. We assume that the number of excitatory inputs in (0, t] is N_E(t), where N_E is a Poisson process with mean rate λ_E, and that the number of inhibitory inputs is N_I(t), where N_I is a Poisson process with mean rate λ_I. Each excitatory input makes V jump up by unity, whereas each inhibitory input causes V to jump down by unity. Thus

$$V(t) = N_E(t) - N_I(t), \qquad V(0) = 0. \tag{9.33}$$

The total number of jumps of V in (0, t] is N(t) = N_E(t) + N_I(t), and {N(t), t ≥ 0} is a Poisson process with mean rate λ = λ_E + λ_I. From the definition of conditional probability we find

$$\Pr(V \text{ jumps by } +1 \text{ in } (t, t + \Delta t] \mid \text{a jump in } V \text{ occurs in } (t, t + \Delta t]) = \lambda_E/\lambda = p, \tag{9.35}$$

$$\Pr(V \text{ jumps by } -1 \text{ in } (t, t + \Delta t] \mid \text{a jump in } V \text{ occurs in } (t, t + \Delta t]) = \lambda_I/\lambda = q.$$

Let the process V be at m at time t and suppose that n ≥ m jumps have occurred, of which n_1 were +1 and n_2 were −1. Then we must have

$$n = n_1 + n_2, \tag{9.38A}$$

$$m = n_1 - n_2, \tag{9.38B}$$

and hence

$$n_1 = (m + n)/2, \tag{9.38C}$$

$$n = m + 2n_2. \tag{9.38D}$$

The probability that V(t) = m if n jumps have occurred in (0, t] is the probability that a binomial random variable with parameters n and p takes the value n_1. That is,

$$\Pr\{V(t) = m \mid n \text{ jumps in } (0, t]\} = \binom{n}{n_1} p^{n_1} q^{n_2}. \tag{9.39}$$

By the law of total probability,

$$\Pr\{V(t) = m\} = \sum_n \Pr\{V(t) = m \mid n \text{ jumps in } (0, t]\} \Pr\{n \text{ jumps in } (0, t]\}.$$
(9.40)

Since n jumps in (0, t] has probability e^{−λt}(λt)^n/n!, we find, writing p_m(t) = Pr{V(t) = m},

$$p_m(t) = e^{-\lambda t} \sideset{}{'}\sum_{n} \frac{(\lambda t)^n}{n!} \binom{n}{(m+n)/2}\, p^{(m+n)/2} q^{(n-m)/2}, \tag{9.41}$$

where the prime on the summation sign indicates that summation is over either even or odd n, depending on whether m is even or odd, respectively. Utilizing (9.38) and the fact that n = m, m + 2, m + 4, ... implies n_2 = 0, 1, 2, ..., this becomes

$$p_m(t) = e^{-\lambda t} \sum_{n_2=0}^{\infty} \frac{(\lambda t)^{m+2n_2}}{(m+2n_2)!} \binom{m+2n_2}{m+n_2}\, p^{m+n_2} q^{n_2}. \tag{9.42}$$

In terms of the modified Bessel function

$$I_\nu(x) = \sum_{k=0}^{\infty} \frac{(x/2)^{\nu+2k}}{k!\,\Gamma(k+\nu+1)}, \tag{9.43}$$

we get

$$p_m(t) = \left(\frac{\lambda_E}{\lambda_I}\right)^{m/2} e^{-(\lambda_E+\lambda_I)t}\, I_m\!\left(2t\sqrt{\lambda_E \lambda_I}\right).$$

9.5.1 Time of first passage to threshold

We assume that there is a fixed threshold which, when reached by V, leads to the emission of an action potential. The threshold condition is an imposed one, and after a spike the potential is artificially reset to zero, possibly after a dead time or refractory period. Passage to time-varying thresholds in this model does not seem to have been considered. We let θ be a positive integer and seek the time of first passage of V to θ, which is identified with the interspike interval. To find the probability distribution of the first-passage time, we employ the method of images in the symmetric case (λ_E = λ_I) and the renewal equation in the asymmetric case.

(A) Symmetric case: method of images

We will first find p*_m(t), the probability that the randomized random walk is at level m at t but has stayed below θ up to time t. This is in distinction to p_m(t), which includes passages below, to, and above θ. Consider Figure 9.10, where a randomized random walk U is shown starting from the image point 2θ and having the value m at t.

[Figure 9.10. Paths considered in the method of images.]

There is a one-to-one correspondence between such paths of U and those of V that start at 0, touch and/or cross the level θ in (0, t), and end up at m at t.
By symmetry, the probability assigned to such paths is p_{2θ−m}(t). These paths are excluded in computing p*_m, so we have

$$p^*_m(t) = p_m(t) - p_{2\theta-m}(t). \tag{9.44}$$

To obtain the probability density f_θ(t) of the time of first passage to level θ, note that a first passage to θ occurs in (t, t + Δt] if V has stayed below θ in (0, t], is, in fact, at θ − 1 at t, and a jump of +1 occurs in (t, t + Δt]. The probability of a jump of +1 in (t, t + Δt] is λ_E Δt = (λ/2) Δt. Putting these probabilities together gives

$$f_\theta(t)\,\Delta t = p^*_{\theta-1}(t)\,(\lambda/2)\,\Delta t. \tag{9.45}$$

Utilizing (9.43) and (9.44) and the fact that λ_E = λ_I, we find

$$f_\theta(t) = \lambda_E e^{-2\lambda_E t}\left[I_{\theta-1}(2\lambda_E t) - I_{\theta+1}(2\lambda_E t)\right], \qquad t > 0. \tag{9.46}$$

A more succinct expression results on using the recurrence relation for the modified Bessel function,

$$I_{\nu-1}(x) - I_{\nu+1}(x) = (2\nu/x)\,I_\nu(x), \tag{9.47}$$

whereupon

$$f_\theta(t) = (\theta/t)\, e^{-2\lambda_E t}\, I_\theta(2\lambda_E t), \qquad t > 0. \tag{9.48}$$

(B) General case: the renewal equation

When the rates of arrival of jumps up and down are not equal, we resort to another method for obtaining f_θ(t). The idea on which the method is based is illustrated in Figure 9.11. A path is shown starting at zero and attaining the value m ≥ θ at t.

[Figure 9.11. Paths that lead to the renewal equation.]

Since m ≥ θ, such a path must at some time before t have passed through the level θ and, in particular, at some time t′ < t have done so for the first time. Integrating over all such paths gives, using a continuous version of the law of total probability,

$$p_m(t) = \int_0^t f_\theta(t')\, p_{m-\theta}(t - t')\, dt'. \tag{9.49}$$

This integral equation is called a renewal equation, which we will solve using Laplace transform methods. Note that the probability of a transition from θ at t′ to m at t is the same as the probability of a transition from 0 at time zero to m − θ at t − t′ (spatial and temporal homogeneity). Equation (9.49) is thus a convolution of f_θ and p_{m−θ}. From Table 3.2 we see that the Laplace transform of the convolution of two functions is the product of their Laplace transforms.
Thus, denoting Laplace transforms by the extra subscript L,

$$p_{m,L}(s) = f_{\theta,L}(s)\, p_{m-\theta,L}(s), \qquad m \ge \theta, \tag{9.50}$$

where s is the transform variable. Rearranging this gives the following useful relation:

$$f_{\theta,L}(s) = p_{m,L}(s)/p_{m-\theta,L}(s). \tag{9.51}$$

The transforms on the right can be found as series. From (9.42) and Table 3.2, and utilizing the property ℒ{e^{−λt}f(t)}(s) = f_L(s + λ), we find

$$p_{m,L}(s) = \frac{(\lambda p)^m}{(s+\lambda)^{m+1}} \sum_{n_2=0}^{\infty} \left(\frac{\lambda^2 pq}{(s+\lambda)^2}\right)^{n_2} \binom{m+2n_2}{m+n_2}, \tag{9.52}$$

and hence

$$f_{\theta,L}(s) = \left(\frac{\lambda p}{s+\lambda}\right)^{\theta} \frac{\displaystyle\sum_{n_2=0}^{\infty} \left(\frac{\lambda^2 pq}{(s+\lambda)^2}\right)^{n_2} \binom{m+2n_2}{m+n_2}}{\displaystyle\sum_{n_2=0}^{\infty} \left(\frac{\lambda^2 pq}{(s+\lambda)^2}\right)^{n_2} \binom{m-\theta+2n_2}{m-\theta+n_2}}. \tag{9.53}$$

It is left as an exercise to show that this is the Laplace transform of the required first-passage-time density,

$$f_\theta(t) = \frac{\theta}{t}\left(\frac{\lambda_E}{\lambda_I}\right)^{\theta/2} e^{-(\lambda_E+\lambda_I)t}\, I_\theta\!\left(2t\sqrt{\lambda_E\lambda_I}\right), \qquad t > 0. \tag{9.54}$$

Moments of the firing time

Let T_θ be the (random) time taken for V to reach θ from the initial state zero (resting state). Then T_θ has the probability density function f_θ. The nth moment of T_θ is

$$E[T_\theta^n] = \int_0^\infty t^n f_\theta(t)\, dt. \tag{9.55}$$

When n = 0 we obtain the total probability mass of T_θ concentrated on (0, ∞). That is,

$$\Pr\{T_\theta < \infty\} = \begin{cases} 1, & \lambda_E \ge \lambda_I, \\ (\lambda_E/\lambda_I)^\theta, & \lambda_E < \lambda_I. \end{cases} \tag{9.56}$$

When λ_E > λ_I, the mean and variance of the firing time can be found with the aid of the following relation (Gradshteyn and Ryzhik 1965, page 708):

$$\int_0^\infty e^{-\alpha x}\, I_\theta(\beta x)\, dx = \frac{\beta^\theta}{\sqrt{\alpha^2 - \beta^2}\left(\alpha + \sqrt{\alpha^2 - \beta^2}\right)^\theta}. \tag{9.58}$$

This yields

$$E[T_\theta] = \frac{\theta}{\lambda_E - \lambda_I}, \qquad \lambda_E > \lambda_I, \tag{9.59}$$

$$\mathrm{Var}[T_\theta] = \frac{\theta(\lambda_E + \lambda_I)}{(\lambda_E - \lambda_I)^3}, \qquad \lambda_E > \lambda_I. \tag{9.60}$$

The coefficient of variation, the standard deviation divided by the mean, is

$$CV[T_\theta] = \left[\frac{\lambda_E + \lambda_I}{\theta(\lambda_E - \lambda_I)}\right]^{1/2}, \tag{9.61}$$

which indicates that the coefficient of variation is inversely proportional to the square root of the threshold for fixed rates of excitation and inhibition. When λ_E = λ_I, although T_θ < ∞ with probability one, the mean (and higher-order moments) of T_θ is infinite.

Note that we have assumed that the excitatory and inhibitory jumps of V are of unit size. If instead the jumps have magnitude a, so that

$$V(t) = a\left[N_E(t) - N_I(t)\right], \tag{9.62}$$

then with a threshold θ > 0, not necessarily an integer, the time to get to threshold will be the time for the process with unit jumps to reach 1 + [θ/a].
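The moment formulas (9.59) and (9.60) can be checked by simulating the randomized random walk event by event. In the sketch below the threshold and rates (θ = 5, λ_E = 3, λ_I = 1) are illustrative assumptions, not values from the text: jumps occur at the pooled rate λ = λ_E + λ_I, and each jump is +1 with probability p = λ_E/λ, as in (9.35).

```python
import random

def first_passage(theta, lam_e, lam_i, rng):
    """Time for the randomized random walk to first reach the integer
    threshold theta. Assumes lam_e > lam_i, so passage is certain."""
    lam = lam_e + lam_i
    t, v = 0.0, 0
    while v < theta:
        t += rng.expovariate(lam)                    # wait for the next jump
        v += 1 if rng.random() < lam_e / lam else -1 # +1 w.p. p, -1 w.p. q
    return t

rng = random.Random(11)
theta, lam_e, lam_i = 5, 3.0, 1.0
times = [first_passage(theta, lam_e, lam_i, rng) for _ in range(5000)]
mean_t = sum(times) / len(times)                            # (9.59): theta/(lam_e-lam_i)
var_t = sum((x - mean_t) ** 2 for x in times) / len(times)  # (9.60): theta*(lam_e+lam_i)/(lam_e-lam_i)**3
```

With these rates both (9.59) and (9.60) predict the value 2.5, and the sample statistics should agree to within Monte Carlo error.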
Tails of the firing-time density

Using the following asymptotic relation for the modified Bessel function at large arguments (Abramowitz and Stegun 1965, page 377),

$$I_\nu(x) \sim \frac{e^x}{\sqrt{2\pi x}}, \qquad x \to \infty, \tag{9.63}$$

we deduce that when there is Poisson excitation and inhibition,

$$f_\theta(t) \sim \frac{\theta\,(\lambda_E/\lambda_I)^{\theta/2}\, e^{-\left(\sqrt{\lambda_E} - \sqrt{\lambda_I}\right)^2 t}}{2\sqrt{\pi}\,(\lambda_E \lambda_I)^{1/4}\, t^{3/2}}, \qquad t \to \infty, \tag{9.64}$$

whereas when there is Poisson excitation only, we have the exact result

$$f_\theta(t) = \frac{\lambda_E^\theta\, t^{\theta-1} e^{-\lambda_E t}}{(\theta - 1)!}. \tag{9.65}$$

Thus the density of the first-passage time to level θ is quite different in its tails, depending on the presence or absence of inhibition.

9.6 The Wiener process

We will soon proceed to more realistic models, which incorporate the decay of membrane potential between synaptic inputs. Before doing so, we consider an approximation to the process V defined in the previous section. The approximating process is a Wiener process (or Brownian motion), which belongs to the general class of Markov processes called diffusion processes. These general concepts will be explained later. Gerstein and Mandelbrot (1964) pioneered the use of the Wiener process in neural modeling.

Diffusion processes have trajectories that are continuous functions of t, in distinction to the randomized random walk, whose sample paths are discontinuous. The study of diffusion processes is often less difficult than that of their discontinuous counterparts, chiefly because the equations describing their important properties are differential equations, about which more is known than about the differential-difference equations that arise for discontinuous processes. Among the reasons for studying the Wiener process as a model for nerve membrane potential are: (i) it is a thoroughly studied process and many of the relevant mathematical problems have been solved; and (ii) from the Wiener process we may construct many other more realistic models of nerve-cell activity.

The Wiener process as a limiting case of a random walk. Consider the process defined by

$$V_a(t) = a\left[N_E(t) - N_I(t)\right],$$
$$t \ge 0, \tag{9.66}$$

where a is a constant, and N_E and N_I are independent Poisson processes with mean rates λ_E = λ_I = λ. The process V_a has jumps up and down of magnitude a. We note that

$$E[V_a(t)] = a\left[\lambda_E - \lambda_I\right]t = 0, \tag{9.67}$$

$$\mathrm{Var}[V_a(t)] = a^2\left[\mathrm{Var}[N_E(t)] + \mathrm{Var}[N_I(t)]\right] = 2a^2\lambda t. \tag{9.68}$$
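The scaling behind (9.67) and (9.68) can be previewed numerically: shrink a and grow λ with 2a²λ held fixed, and V_a(t) keeps mean 0 and variance 2a²λt while its jumps become invisibly small. The sketch below uses illustrative values (a = 0.05, with λ chosen so that 2a²λ = 1; neither is from the text); samples of N_E(t) and N_I(t) are generated by accumulating exponential inter-event times.

```python
import random

def poisson_count(rate, t, rng):
    """N(t): number of events of a rate-`rate` Poisson process in (0, t],
    generated by accumulating exponential inter-event times."""
    n, s = 0, rng.expovariate(rate)
    while s <= t:
        n += 1
        s += rng.expovariate(rate)
    return n

rng = random.Random(5)
a, t = 0.05, 1.0
lam = 1.0 / (2 * a * a)       # chosen so that 2*a*a*lam = 1
vals = [a * (poisson_count(lam, t, rng) - poisson_count(lam, t, rng))
        for _ in range(2000)]
mean_v = sum(vals) / len(vals)                            # (9.67): 0
var_v = sum((x - mean_v) ** 2 for x in vals) / len(vals)  # (9.68): 2*a*a*lam*t = 1
```

As a → 0 with 2a²λ fixed, these samples approach those of a Wiener process with variance parameter 2a²λ per unit time.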