Manly (1984) and references therein; see also Cormack (1968) and the same author's conference article (1973), which begins with the following remarks:
Many of the papers in this volume are concerned with the process of describing the development of an animal population by a mathematical model. The properties of such a model can then be derived, either by elegant mathematics or equally elegant computer simulation, in order to describe the future state of the population in terms of certain initial boundary conditions. The model becomes of scientific value when such predictions can be tested, which requires in turn that the mathematical symbols can be replaced by numbers. The parameters of the model must be estimated from data of a type that a biologist can collect about the population he is studying.
For an introductory treatment written for biologists, see Begon (1979).
3.3 THE POISSON DISTRIBUTION
We recall the definition and some elementary properties of Poisson random variables.
Definition A non-negative integer-valued random variable X has a Poisson distribution with parameter λ > 0 if

p_k = Pr{X = k} = e^{−λ} λ^k / k!,   k = 0, 1, 2, ....   (3.7)
From the definition of e^x as Σₖ x^k/k! we find

Σₖ Pr{X = k} = 1.
The mean and variance of X are easily found to be

E(X) = Var(X) = λ.
The shape of the probability mass function depends on λ, as Table 3.1 and the graphs of Fig. 3.2 illustrate.
Table 3.1 Probability mass functions for some Poisson random variables

          p0     p1     p2     p3     p4     p5     p6     p7     p8
λ = 1/2   .607   .303   .076   .013   .002   <.001
λ = 1     .368   .368   .184   .061   .015   .003   <.001
λ = 2     .135   .271   .271   .180   .090   .036   .012   .003   <.001
Figure 3.2 Probability mass functions for Poisson random variables with the parameter values λ = 1/2, 1 and 2.
There are two points which emerge just from looking at Fig. 3.2:
(i) Poisson random variables with different parameters can have quite different looking mass functions.
(ii) When λ gets large the mass function takes on the shape of a normal density (see Chapter 6).
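The entries of Table 3.1 follow directly from (3.7); a minimal Python sketch (the parameter values are those of the table):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Pr{X = k} from (3.7): e^{-lam} lam^k / k!."""
    return exp(-lam) * lam ** k / factorial(k)

# Reproduce the rows of Table 3.1, rounded to three decimal places.
rows = {lam: [round(poisson_pmf(k, lam), 3) for k in range(9)]
        for lam in (0.5, 1.0, 2.0)}
```

The probabilities for each λ sum to 1, in agreement with the identity above.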
Poisson random variables arise frequently in counting numbers of events. We will consider events which occur randomly in one-dimensional space or time and in two-dimensional space, the latter being of particular relevance in ecology. Generalizations to higher-dimensional spaces will also be briefly discussed.
3.4 HOMOGENEOUS POISSON POINT PROCESS IN ONE DIMENSION
Let t represent a time variable. Suppose an experiment begins at t = 0. Events of a particular kind occur randomly, the first at time T₁, the second at T₂, etc., where T₁, T₂, etc., are random variables. The values tᵢ of Tᵢ, i = 1, 2, ..., will be called points of occurrence or just events (see Fig. 3.3).
Figure 3.3 Events of a particular kind occur at t₁, t₂, etc.
Let (s₁, s₂] be a subinterval of the interval [0, s], where s < ∞. Denote by N(s₁, s₂) the number of points of occurrence in (s₁, s₂]. Then N(s₁, s₂) is a random variable, and the collection of all such random variables for the various subintervals (or in fact any subsets of [0, s]), abbreviated to N, is called a point process on [0, s].
Definition N is an homogeneous Poisson point process with rate λ if:

(i) for any 0 ≤ s₁ < s₂ ≤ s, N(s₁, s₂) is a Poisson random variable with parameter λ(s₂ − s₁);
(ii) for any 0 ≤ s₀ < s₁ < ⋯ < sₙ ≤ s, the random variables N(s₀, s₁), N(s₁, s₂), ..., N(sₙ₋₁, sₙ) are mutually independent.

The waiting time to the first event

Let s ≥ 0. Let T₁ be the time which elapses before the first event after s. Then we have the following result.
Theorem 3.3 The waiting time, T₁, for an event is exponentially distributed with mean 1/λ.
Note that the distribution of T₁ does not depend on s. We say the process has no memory, a fact which is traced to the definition, since the numbers of events in (s₁, s₂] and (s₂, s₃] are independent.
Proof First we note that the probability of one event in any small interval of length Δt is

e^{−λΔt}(λΔt) = λΔt + o(Δt),   (3.8)

where o(Δt) here stands for terms which vanish faster than Δt as Δt goes to
zero. We will have T₁ ∈ (t, t + Δt] if there are no events in (s, s + t] and one event in (s + t, s + t + Δt]. By independence, the probability of both of these is the product of the probabilities of each occurring separately. Hence

Pr{T₁ ∈ (t, t + Δt]} = e^{−λt}[λΔt + o(Δt)].
It follows that the density of T₁ is given by

f_{T₁}(t) = λe^{−λt},   t > 0.

Alternatively, this result may be obtained by noting that

Pr{T₁ > t} = Pr{N(s, s + t) = 0} = e^{−λt}.

We find that not only is the waiting time to an event exponentially distributed but also the following is true.

Theorem 3.4 The time interval between events is exponentially distributed with mean 1/λ.

Proof This is Exercise 8.

In fact it can be shown that if the distances between consecutive points of occurrence are independent and identically exponentially distributed, then the point process is a homogeneous Poisson point process. This statement provides one basis for statistical tests for a Poisson process (Cox and Lewis, 1966).

The waiting time to the kth point of occurrence

Theorem 3.5 Let Tₖ be the waiting time until the kth event after s, k = 1, 2, .... Then Tₖ has a gamma density with parameters k and λ.

Proof The kth point of occurrence will be the only one in (s + t, s + t + Δt] if and only if there are k − 1 points in (s, s + t] and one point in (s + t, s + t + Δt]. It follows from (3.7) and (3.8) that

Pr{Tₖ ∈ (t, t + Δt]} = [e^{−λt}(λt)^{k−1}/(k − 1)!][λΔt + o(Δt)],   k = 1, 2, ....

Hence the density of Tₖ is

f_{Tₖ}(t) = λe^{−λt}(λt)^{k−1}/(k − 1)!,   t > 0.   (3.9)
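The converse statement above (independent exponential gaps produce a homogeneous Poisson point process) can be illustrated numerically: lay down points with i.i.d. exponential spacings and check that the count of events in a unit interval behaves like a Poisson random variable with mean λ. A simulation sketch (the rate λ = 2 and the number of trials are arbitrary illustrative choices):

```python
import random

random.seed(1)
lam = 2.0

def count_in_unit_interval():
    """Lay down points with i.i.d. exponential gaps; count those in (0, 1]."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)   # exponential gap with mean 1/lam
        if t > 1.0:
            return n
        n += 1

trials = 20_000
counts = [count_in_unit_interval() for _ in range(trials)]
mean = sum(counts) / trials        # should be near lam = 2
freq0 = counts.count(0) / trials   # should be near e^{-2} = 0.135
```

The empirical mean approaches λ and the empirical probability of zero events approaches e^{−λ}, as the Poisson law requires.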
Figure 3.4 The densities of the waiting times for 1, 2 and 4 events in a homogeneous Poisson point process with λ = 1.
and the mean and variance of Tₖ are given by

E(Tₖ) = k/λ,   Var(Tₖ) = k/λ².
Note that this result can also be deduced from the fact that the sum of k ≥ 1 independent exponentially distributed random variables, each with mean 1/λ, has a gamma density as in (3.9) (prove this by using Theorem 2.4). Furthermore, it can be shown that the waiting time to the kth event after an event has a density given by (3.9).
The waiting times for k = 1, 2 and 4 events have densities as depicted in Fig. 3.4. Note that as k gets larger, the density approaches that of a normal random variable (see Chapter 6, where we discuss the central limit theorem).
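The gamma moments can be checked by Monte Carlo: sum k independent exponential waiting times and compare the sample mean and variance of Tₖ with k/λ and k/λ². A sketch with the illustrative values k = 4 and λ = 1:

```python
import random

random.seed(2)
lam, k, n = 1.0, 4, 50_000

# T_k as the sum of k independent exponential waiting times, each with mean 1/lam.
samples = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
# Theorem 3.5 predicts E(T_k) = k/lam = 4 and Var(T_k) = k/lam^2 = 4.
```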
3.5 OCCURRENCE OF POISSON PROCESSES IN NATURE
The following reasoning leads in a natural way to the Poisson point process. Points, representing the times of occurrence of an event, are sprinkled randomly on the interval [0, s] under the assumptions:
(i) the numbers of points in disjoint subintervals are independent;
(ii) the probability of finding a point in a very small subinterval is proportional to its length, whereas the probability of finding more than one point is negligible.
It is convenient to divide [0, s] into n subintervals of equal length Δs = s/n. Under the above assumptions the probability p that a given subinterval contains a point is λs/n, where λ is a positive constant. Hence the chance of k occupied subintervals is
Pr{k points in [0, s]} = b(k; n, p) = C(n, k)(λs/n)^k (1 − λs/n)^{n−k}.

Now as n → ∞, λs/n → 0 and we may invoke the Poisson approximation to the binomial probabilities (see also Chapter 6):

b(k; n, p) → e^{−np}(np)^k/k!   as n → ∞.

But np = n(λs)/n = λs. Hence in the limit as n → ∞,

Pr{k points in [0, s]} = e^{−λs}(λs)^k/k!,

as required.
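The limit can be watched numerically: with λs = 2 held fixed, the binomial probabilities b(k; n, λs/n) approach the Poisson probabilities e^{−λs}(λs)^k/k! as n grows. A sketch (the choice λs = 2 and the values of n are illustrative):

```python
from math import comb, exp, factorial

lam_s = 2.0   # lambda * s, held fixed as n grows

def binom_pk(k, n):
    """b(k; n, p) with p = lam_s / n."""
    p = lam_s / n
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

poisson_pk = [exp(-lam_s) * lam_s ** k / factorial(k) for k in range(6)]
# Maximum discrepancy over k = 0..5 for increasing n.
err = {n: max(abs(binom_pk(k, n) - poisson_pk[k]) for k in range(6))
       for n in (10, 100, 10_000)}
```

The maximum discrepancy shrinks roughly like 1/n.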
The above assumptions and limiting argument should help make it clear why approximations to Poisson point processes arise in the study of a broad range of natural random phenomena. The following examples provide evidence for this claim.
Examples
(i) Radioactive decay
The times at which a collection of atomic nuclei emit, for example, alpha-particles can be well approximated as a Poisson point process. Suppose there are N observation periods of duration T, say. In Exercise 18 it is shown that under the Poisson hypothesis, the expected value, nₖ, of the number, Nₖ, of observation periods containing k emissions is

nₖ = N e^{−μ} μ^k / k!,   k = 0, 1, 2, ...,   (3.10)

where μ = λT is the expected number of emissions per observation period. For an experimental data set, see Feller (1968, p. 160).
(ii) Arrival times
The times of arrival of customers at stores, banks, etc., can often be approximated by Poisson point processes. Similarly for the times at which
Poisson processes in Nature 43
phone calls are made, appliances are switched on, accidents in factories or in traffic occur, etc. In queueing theory the Poisson assumption is usually made (see for example Blake, 1979), partly because of empirical evidence and partly because it leads to mathematical simplifications. In most of these situations the rate may vary so that λ = λ(t). However, over short enough time periods, the assumption that λ is constant will often be valid.
(iii) Mutations
In cells changes in genetic (hereditary) material occur which are called mutations. These may be spontaneous or induced by external agents. If mutations occur in the reproductive cells (gametes) then the offspring inherits the mutant genes. In humans the rate at which spontaneous mutations occur per gene is about 4 per hundred thousand gametes (Strickberger, 1968). In the common bacterium E. coli, a mutant variety is resistant to the drug streptomycin. In one experiment, N = 150 petri dishes were plated with one million bacteria each. It was found that 98 petri dishes had no resistant colonies, 40 had one, 8 had two, 3 had three and 1 had four. The average number n of mutants per million cells (bacteria) is therefore

n = (40 × 1 + 8 × 2 + 3 × 3 + 1 × 4)/150 = 0.46.

Under the Poisson hypothesis, the expected numbers nₖ of dishes containing k mutants are as given in Table 3.2, as calculated using (3.10) with μ = n. The observed values Nₖ are also given and the agreement is reasonable. This can be demonstrated with a χ² test (see Chapter 1).

(iv) Voltage changes at nerve-muscle junction

The small voltage changes seen in a muscle cell attributable to spontaneous activity in neighbouring nerve cells occur at times which are well described as a Poisson point process. A further aspect of this will be elaborated on in Section 3.9. Figure 3.5 shows an experimental histogram of waiting times
Table 3.2 Bacterial mutation data*

k            0      1      2      3     4
nₖ (Exp.)    94.7   43.5   10.0   1.5   0.2
Nₖ (Obs.)    98     40     8      3     1

*From Strickberger (1968).
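The expected counts in Table 3.2 come from (3.10) with N = 150 and μ replaced by the sample mean 0.46; a minimal sketch reproducing them:

```python
from math import exp, factorial

N = 150
observed = [98, 40, 8, 3, 1]   # N_k for k = 0, 1, 2, 3, 4 resistant colonies
n_bar = sum(k * Nk for k, Nk in enumerate(observed)) / N   # sample mean, 0.46
# Equation (3.10) with mu = n_bar: expected number of dishes with k mutants.
expected = [N * exp(-n_bar) * n_bar ** k / factorial(k) for k in range(5)]
```

The computed values agree with Table 3.2 to within rounding.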
Figure 3.5 A histogram of waiting times between spontaneously occurring small voltage changes in a muscle cell due to activity in a neighbouring nerve cell. From Fatt and Katz (1952).
between such events. According to the Poisson assumption, the waiting time should have an exponential density, which is seen to be a good approximation to the observed data. This may also be rendered more precise with a χ² goodness of fit test. For further details see Van der Kloot et al. (1975).
3.6 POISSON POINT PROCESSES IN TWO DIMENSIONS
Instead of considering random points on the line we may consider random points in the plane ℝ² = {(x, y): −∞ < x < ∞, −∞ < y < ∞}. A homogeneous Poisson point process in the plane with rate λ is defined in analogy with the one-dimensional case: for any subset A of ℝ², the number of events in A is a Poisson random variable with parameter λ|A|, where |A| is the area of A, and the numbers of events in disjoint subsets are independent. When the events are the locations of trees, such a process is referred to as a Poisson forest. Let R₁ be the distance from an arbitrary fixed point to the nearest event.

Theorem 3.6 In a homogeneous Poisson point process in the plane with rate λ, the distance R₁ from a fixed point to the nearest event has the density

f_{R₁}(r) = 2λπr e^{−λπr²},   r > 0.   (3.11)
Proof We will have R₁ > r if and only if there are no events in the circle of radius r with centre at the fixed point under consideration. Such a circle has area πr², so from the definition of a Poisson point process in the plane, the number of events inside the circle is a Poisson random variable with mean λπr². This gives

Pr{R₁ > r} = e^{−λπr²}.

We must then have

f_{R₁}(r) = −(d/dr) Pr{R₁ > r},

which leads to (3.11) as required.
We may also prove that the distance from an event to its nearest neighbour in a Poisson forest has the density given by (3.11). It is left as an exercise to prove the following result.
Theorem 3.7 In a Poisson forest the distance Rₖ to the kth nearest event has the density

f_{Rₖ}(r) = 2λπr(λπr²)^{k−1} e^{−λπr²}/(k − 1)!,   r > 0,   k = 1, 2, ....
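Theorem 3.6 lends itself to a simulation check: scatter a Poisson number of points uniformly over a square window and compare the empirical tail probability Pr{R₁ > r} with e^{−λπr²}. A sketch (the window size, λ = 1 and r = 0.5 are illustrative choices; the window is large enough that edge effects are negligible):

```python
import math
import random

random.seed(3)
lam, L = 1.0, 5.0   # intensity; the window is the square [-L, L] x [-L, L]

def poisson_sample(mu):
    """Knuth's method: k is Poisson(mu) when a product of uniforms first drops below e^{-mu}."""
    limit, prod, k = math.exp(-mu), random.random(), 0
    while prod > limit:
        prod *= random.random()
        k += 1
    return k

def nearest_distance():
    """Distance from the origin to the nearest event of one Poisson scatter."""
    n = poisson_sample(lam * (2 * L) ** 2)
    return min((math.hypot(random.uniform(-L, L), random.uniform(-L, L))
                for _ in range(n)), default=math.inf)

trials, r = 5_000, 0.5
empirical = sum(nearest_distance() > r for _ in range(trials)) / trials
theoretical = math.exp(-lam * math.pi * r ** 2)   # Theorem 3.6, about 0.456
```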
Estimating the number of trees in a forest
If one is going to estimate the number of trees in a forest, it must first be ensured that the assumed probability model is valid. The obvious hypothesis to begin with is that one is dealing with a Poisson process in the plane. A few methods of testing this hypothesis and a method of estimating X are now outlined. For some further references see Patil et al. (1971) and Heltshe and Ritchey (1984). An actual data set is shown in Fig. 3.7.
Method 1 - Distance measurements
Under the assumption of a Poisson forest the point-nearest tree or tree-nearest tree distance has the density f_{R₁} given in (3.11). The actual measurements of such distances may be collected into a histogram or empirical distribution function. A goodness of fit test such as χ² (see Chapter 1) or Kolmogorov-Smirnov (see for example Hoel, 1971; or Afifi and Azen, 1979) can then be carried out. Note that edge effects must be minimized, since the density of R₁ was obtained on the basis of an infinite forest.

Figure 3.7 Locations of trees in Lansing Woods. Smaller dots represent oaks, larger dots represent hickories and maples. The data are analysed in Exercise 22. Reproduced with permission from Clayton (1984).
Assuming a Poisson forest, the parameter λ may be estimated as follows. Let {Xᵢ, i = 1, 2, ..., n} be a random sample for the random variable with the density (3.11). Then it is shown in Exercise 21 that an unbiased estimator (see Exercise 6) of 1/λ is

(π/n) Σᵢ Xᵢ².

An estimate of λ is thus obtained and hence, if the total area A is known, the total number of trees may be estimated as λ̂A. For further details see Diggle (1975, 1983), Ripley (1981) and Upton and Fingleton (1985).
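Since Pr{R₁ > r} = e^{−λπr²}, the transformed variable πR₁² is exponentially distributed with mean 1/λ, so the sample mean of πXᵢ² estimates 1/λ without bias. A simulation sketch of this estimator (λ = 2 and the sample size are illustrative; sampling uses inversion of the tail probability):

```python
import math
import random

random.seed(4)
lam, n = 2.0, 100_000

# Sample from the density (3.11) by inversion: Pr{R1 > r} = exp(-lam*pi*r^2)
# gives R1 = sqrt(-ln(U) / (lam*pi)) for U uniform on (0, 1].
xs = [math.sqrt(-math.log(1.0 - random.random()) / (lam * math.pi))
      for _ in range(n)]

# pi * X_i^2 is exponential with mean 1/lam, so its sample mean estimates 1/lam.
inv_lam_hat = sum(math.pi * x * x for x in xs) / n
lam_hat = 1.0 / inv_lam_hat
```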
Method 2 - Counting

Another method of testing the hypothesis of a Poisson forest is to subdivide the area of interest into N equal smaller areas called cells. The numbers Nₖ of cells containing k plants can then be compared, using a χ² test, with the expected numbers under the Poisson assumption given by (3.10), with μ = the mean number of plants per cell.
Extensions to three and four dimensions
Suppose objects are randomly distributed throughout a 3-dimensional region. The above concepts may be extended by defining a Poisson point process in ℝ³. Here, if A is a subset of ℝ³, the number of objects in A is a Poisson random variable with parameter λ|A|, where λ is the mean number of objects per unit volume and |A| is the volume of A. Such a point process will be useful in describing distributions of organisms in the ocean or the earth's atmosphere, distributions of certain rocks in the earth's crust and of objects in space. Similarly, a Poisson point process may be defined on subsets of ℝ⁴ with a view to describing random events in space-time.
3.7 COMPOUND POISSON RANDOM VARIABLES
Let Xₖ, k = 1, 2, ..., be independent identically distributed random variables and let N be a non-negative integer-valued random variable, independent of the Xₖ. Then we may form the sum

S_N = X₁ + X₂ + ⋯ + X_N,   (3.12)

where the number of terms is determined by the value of N. Thus S_N is a random sum of random variables; we take S_N to be zero if N = 0. If N is a Poisson random variable, S_N is called a compound Poisson random variable. The mean and variance of S_N are then as follows.
Theorem 3.8 Let E(X₁) = μ and Var(X₁) =
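A random sum as in (3.12) is easy to simulate. The sketch below takes N Poisson with mean 3 and the Xₖ exponential with mean 2 (both purely illustrative choices) and checks that the sample mean of S_N is close to E(N)E(X₁), as Wald's identity guarantees for a random number of independent terms:

```python
import math
import random

random.seed(5)
mean_N, mu = 3.0, 2.0   # E(N) and E(X_1)

def poisson_sample(m):
    """Knuth's method for a Poisson(m) variate."""
    limit, prod, k = math.exp(-m), random.random(), 0
    while prod > limit:
        prod *= random.random()
        k += 1
    return k

def random_sum():
    """S_N = X_1 + ... + X_N with N Poisson, X_k exponential, all independent."""
    return sum(random.expovariate(1.0 / mu)
               for _ in range(poisson_sample(mean_N)))

trials = 50_000
sample_mean = sum(random_sum() for _ in range(trials)) / trials   # near 3 * 2 = 6
```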