Equation (7.1) states that the values of X at all times prior to n - 1 have no effect whatsoever on the conditional probability distribution of X_n given X_{n-1}. Thus a Markov process has memory of its past values, but only to a limited extent.

The collection of quantities Pr{X_n = k | X_{n-1} = j}, for the various n, j and k, is called the set of one-time-step transition probabilities. It will be seen later (Section 8.4) that these provide a complete description of the Markov process, for with them the joint distribution function of (X_n, X_{n-1}, ..., X_1, X_0), or any subset thereof, can be found for any n. Furthermore, one only has to know the initial value of the process (in conjunction with its transition probabilities) to determine the probabilities that it will take on its various possible values at all future times. This situation may be compared with initial-value problems in differential equations, except that here probabilities are determined by the initial conditions.

All the random processes we will study in the remainder of this book are Markov processes. In the present chapter we study simple random walks, which are Markov processes in discrete time and with a discrete state space. Such processes are examples of Markov chains, which will be discussed more generally in the next chapter.

One note concerning terminology. We often talk of the value of a process at time t, say, which really refers to the value of a single random variable (X(t)), even though a process is a collection of several random variables.

7.2 UNRESTRICTED SIMPLE RANDOM WALK

Suppose a particle is initially at the point x = 0 on the x-axis. At each subsequent time unit it moves a unit distance to the right, with probability p, or a unit distance to the left, with probability q, where p + q = 1. At time unit n let the position of the particle be X_n.
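The motion just described is easy to simulate. The following sketch (Python; the function name, defaults and seed are illustrative, not from the text) generates one sample path of the walk:

```python
import random

def simple_random_walk(n_steps, p, x0=0, seed=None):
    """Return one sample path X_0, X_1, ..., X_n of the simple random walk.

    Each step is +1 with probability p and -1 with probability q = 1 - p.
    """
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        step = 1 if rng.random() < p else -1
        path.append(path[-1] + step)
    return path

path = simple_random_walk(10, p=0.5, seed=1)
```

Every path produced this way starts at x0 and changes by exactly one unit per time step, matching the description above.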
The above assumptions yield X_0 = 0, with probability one, and in general,

X_n = X_{n-1} + Z_n,    n = 1, 2, ...,

where the Z_n are identically distributed with

Pr{Z_1 = +1} = p,    Pr{Z_1 = -1} = q.

It is further assumed that the steps taken by the particle are mutually independent random variables.

Definition. The collection of random variables X = {X_0, X_1, X_2, ...} is called a simple random walk in one dimension.

It is 'simple' because the steps take only the values +1 and -1, in distinction to cases where, for example, the Z_n are continuous random variables. The simple random walk is a random process indexed by a discrete time parameter (n = 0, 1, 2, ...) and has a discrete state space because its possible values are {0, ±1, ±2, ...}. Furthermore, because there are no bounds on the possible values of X, the random walk is said to be unrestricted.

Sample paths

Two possible beginnings of sequences of values of X are

{0, +1, +2, +1, 0, -1, 0, +1, +2, +3, ...}
{0, -1, 0, -1, -2, -3, -4, -3, -4, -5, ...}

The corresponding sample paths are sketched in Fig. 7.2.

Figure 7.2 Two possible sample paths of the simple random walk.

Markov property

A simple random walk is clearly a Markov process. For example,

Pr{X_4 = 2 | X_3 = 3, X_2 = 2, X_1 = 1, X_0 = 0} = Pr{X_4 = 2 | X_3 = 3} = Pr{Z_4 = -1} = q.

That is, the probability is q that X_4 has the value 2 given that X_3 = 3, regardless of the values of the process at epochs 0, 1, 2. The one-time-step transition probabilities are

p_jk = Pr{X_n = k | X_{n-1} = j} = { p, if k = j + 1,
                                   { q, if k = j - 1,
                                   { 0, otherwise,

and in this case these do not depend on n.

Mean and variance

We first observe that

X_n = X_0 + Z_1 + Z_2 + ... + Z_n.

Then, because the Z_n are identically distributed and independent random variables and X_0 = 0 with probability one,

E(X_n) = E( Z_1 + ... + Z_n ) = nE(Z_1)

and

Var(X_n) = Var( Z_1 + ... + Z_n ) = nVar(Z_1).

Now,

E(Z_1) = 1·p + (-1)·q = p - q

and

E(Z_1^2) = 1·p + 1·q = p + q = 1.
Hence

Var(Z_1) = E(Z_1^2) - E^2(Z_1) = 1 - (p - q)^2 = 4pq,

so that

E(X_n) = n(p - q),                                    (7.2)
Var(X_n) = 4npq.                                      (7.3)

Approximate probability distribution

If X_0 = 0, then

X_n = Z_1 + Z_2 + ... + Z_n,

where the Z_k are i.i.d. random variables with finite means and variances. Hence, by the central limit theorem (Section 6.4),

(X_n - E(X_n)) / σ(X_n) converges in distribution to N(0, 1) as n → ∞.

Since E(X_n) and σ(X_n) are known from (7.2) and (7.3), we have

(X_n - n(p - q)) / √(4npq) converging in distribution to N(0, 1).

Thus, for example,

Pr{n(p - q) - 1.96√(4npq) < X_n < n(p - q) + 1.96√(4npq)} ≈ 0.95.

Figure 7.3 Mean of the random walk versus n for p = 0.5 and p = 0.8, and normal density approximations for the probability distributions of the process at epochs n = 50 and n = 100.

After n = 10 000 steps with p = 0.6, E(X_n) = 2000 and

Pr{1808 < X_10000 < 2192} ≈ 0.95,

whereas when p = 0.5 the mean is 0 and

Pr{-196 < X_10000 < 196} ≈ 0.95.

Dividing by p and letting r = q/p gives

P_{a+1} - P_a = r(P_a - P_{a-1}).

It is seen that when p = q and c = 10 and both players in the gambling game start with the same capital, the expected duration of the game is 25 rounds. If the total capital is c = 1000 and is equally shared by the two players to start with, then the average duration of their game is 250 000 rounds! Finally we note that when c = ∞, the expected time to absorption is a/(q - p) when p < q, and is infinite when p ≥ q.

The collection of random variables {X(t), t ≥ 0}, indexed by t, is a continuous process in continuous time called a Wiener process or Brownian motion, though the latter term also refers to a physical phenomenon (see below). The Wiener process (named after Norbert Wiener, celebrated mathematician, 1894-1964) is a fascinating mathematical construction which has been much studied by mathematicians. Though it might seem just an abstraction, it has provided useful mathematical approximations to random processes in the real world. One outstanding example is Brownian motion. When a small particle is in a fluid (liquid or gas) it is buffeted around by the molecules of the fluid, usually at an astronomical rate. Each little impact moves the particle a tiny amount.
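The numerical claims above are easy to check. The sketch below (Python; function names are illustrative) evaluates the normal approximation to an interval probability using (7.2) and (7.3), and the fair-game expected duration a(c - a):

```python
from math import erf, sqrt

def walk_interval_prob(n, p, lo, hi):
    """Normal approximation to Pr{lo < X_n < hi} using (7.2) and (7.3)."""
    q = 1 - p
    mu, sigma = n * (p - q), sqrt(4 * n * p * q)
    def phi(z):
        # standard normal cumulative distribution function
        return 0.5 * (1 + erf(z / sqrt(2)))
    return phi((hi - mu) / sigma) - phi((lo - mu) / sigma)

def fair_game_duration(a, c):
    """Expected duration a(c - a) of the fair (p = q) game: capitals a and c - a."""
    return a * (c - a)

prob = walk_interval_prob(10000, 0.6, 1808, 2192)  # close to 0.95
rounds = fair_game_duration(5, 10)                 # 25 rounds
```

With n = 10 000 and p = 0.6, sigma = √9600 ≈ 98, so the interval 2000 ± 192 is just the mean plus or minus 1.96 standard deviations, as in the text.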
You can see this if you ever watch dust or smoke particles in a stream of sunlight. This phenomenon, the erratic motion of a particle in a fluid, is called Brownian motion after the Scottish botanist Robert Brown, who observed the motion of pollen grains in a fluid under a light microscope. In 1905, Albert Einstein obtained a theory of Brownian motion using the same kind of reasoning as we did in going from random walk to Wiener process. The theory was subsequently confirmed by the experimental results of Perrin. For further reading on the Wiener process see, for example, Parzen (1962), and for more advanced aspects, Karlin and Taylor (1975) and Hida (1980).

Random walks have also been employed to represent the voltage in nerve cells (neurons). A step up in the voltage is called excitation and a step down is called inhibition. Also, there is a critical level (threshold) of excitation at which the cell emits a travelling wave of voltage called an action potential. The random walk model of a neuron was introduced by Gerstein and Mandelbrot (1964), who also used the Wiener process as an approximation for the voltage. Many other neural models have since been proposed and analysed (see, for example, Tuckwell, 1988).

REFERENCES

Feller, W. (1968). An Introduction to Probability Theory and its Applications. Wiley, New York.
Gerstein, G. L. and Mandelbrot, B. (1964). Random walk models for the spike activity of a single neuron. Biophys. J., 4, 41-68.
Hida, T. (1980). Brownian Motion. Springer-Verlag, New York.
Kannan, D. (1979). An Introduction to Stochastic Processes. North-Holland, Amsterdam.
Karlin, S. and Taylor, H. (1975). A First Course in Stochastic Processes. Academic Press, New York.
Parzen, E. (1962). Stochastic Processes. Holden-Day, San Francisco.
Shiryayev, A. N. (1984). Probability. Springer-Verlag, New York.
Tuckwell, H. C. (1988). Introduction to Theoretical Neurobiology, vol. 2, Nonlinear and Stochastic Theories. Cambridge University Press, New York.
EXERCISES

1. Give physical examples of the four kinds of random process ((a)-(d) in Section 7.1). State in each case whether the process is a Markov process.

2. Let X = {X_0, X_1, X_2, ...} be a random process in discrete time and with a discrete state space. Given that the successive increments X_1 - X_0, X_2 - X_1, ... are independent, show that X is a Markov process.

3. For a simple random walk enumerate all possible sample paths that lead to the value X_4 = -2 after 4 steps. Hence verify formula (7.5) for Pr{X_4 = -2}.

4. Let X_n = X_{n-1} + Z_n, n = 1, 2, ..., describe a random walk in which the Z_n are independent normal random variables, each with mean μ and variance σ^2. Find the exact probability law of X_n if X_0 = x_0 with probability one.

5. In certain gambling situations (e.g. horse racing, dogs) the following is an approximate description. At each trial a gambler bets $m, assumed fixed. With probability q he loses all the $m and with probability p = 1 - q he wins back his $m plus a profit on each dollar which is a random variable with mean μ and variance σ^2. Let X_n be the gambler's fortune after n bets. Deduce that {X_0, X_1, X_2, ...} is a random walk with X_0 = x_0, the gambler's initial capital, and

X_n = X_{n-1} + Z_n,    n = 1, 2, ...,

Z_n = m[I_n Y_n - (1 - I_n)],
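The enumeration asked for in Exercise 3 can be checked by brute force over all 2^4 step sequences (an illustrative sketch, not part of the text; the function name is hypothetical):

```python
from itertools import product

def paths_to(target, n, p):
    """Count n-step (+1/-1) paths ending at `target` and sum their probabilities."""
    q = 1 - p
    paths = [s for s in product((1, -1), repeat=n) if sum(s) == target]
    prob = sum(p ** s.count(1) * q ** s.count(-1) for s in paths)
    return len(paths), prob

# X_4 = -2 requires exactly one +1 step and three -1 steps,
# so there are C(4, 1) = 4 such paths, each of probability p*q**3.
n_paths, prob = paths_to(-2, 4, 0.5)
```

For p = 0.5 this gives 4 paths with total probability 4 * (0.5)^4 = 0.25, consistent with the binomial form of the distribution of X_n.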