STAT/MATH 491
Solutions to homework 2.
5.4.1 Assume m ≤ n and write $\mu = E(Z_1)$. Then $E(Z_mZ_n) = E\bigl(Z_m E(Z_n \mid Z_m)\bigr)$. Now given that $Z_m = k$, $Z_n$ can be thought of as the sum of $k$ independent chains of length $n - m$. Thus $E(Z_n \mid Z_m = k) = kE(Z_{n-m}) = k\mu^{n-m}$, so
$$E(Z_mZ_n) = E(Z_m \cdot Z_m\mu^{n-m}) = \mu^{n-m}E(Z_m^2).$$
Hence
$$\mathrm{Cov}(Z_m, Z_n) = E(Z_mZ_n) - E(Z_m)E(Z_n) = \mu^{n-m}E(Z_m^2) - \mu^{m+n} = \mu^{n-m}\bigl(E(Z_m^2) - \mu^{2m}\bigr) = \mu^{n-m}\mathrm{Var}(Z_m),$$
so
$$\rho(Z_m, Z_n) = \frac{\mathrm{Cov}(Z_m, Z_n)}{\sqrt{\mathrm{Var}(Z_m)\,\mathrm{Var}(Z_n)}} = \mu^{n-m}\sqrt{\frac{\mathrm{Var}(Z_m)}{\mathrm{Var}(Z_n)}} = \mu^{(n-m)/2}\sqrt{\frac{\mu^m - 1}{\mu^n - 1}}$$
using Lemma 2 on p. 172 (for $\mu \ne 1$; when $\mu = 1$ the lemma gives $\mathrm{Var}(Z_n) = n\sigma^2$, so $\rho(Z_m, Z_n) = \sqrt{m/n}$).
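As a sanity check (my own sketch, not part of the solution), here is a short Monte Carlo simulation, assuming Poisson($\mu$) offspring; the values of $\mu$, $m$, $n$ and the number of runs are arbitrary illustrative choices.

```python
# Monte Carlo check of rho(Z_m, Z_n) for a branching process with Poisson(mu)
# offspring.  mu, m, n and the number of runs are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
mu, m, n, runs = 1.3, 3, 6, 50_000

Zm = np.zeros(runs)
Zn = np.zeros(runs)
for r in range(runs):
    z = 1                                  # Z_0 = 1
    for gen in range(1, n + 1):
        z = rng.poisson(mu, size=z).sum()  # sum of z independent family sizes
        if gen == m:
            Zm[r] = z
    Zn[r] = z

est = np.corrcoef(Zm, Zn)[0, 1]
theory = mu ** ((n - m) / 2) * np.sqrt((mu**m - 1) / (mu**n - 1))
print(f"simulated rho = {est:.3f}, formula gives {theory:.3f}")
```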
6.2.2 Since $p_{is}(n) > 0$ for some $n = n(i)$, and $p_{si}(n) = 0$ for all $n$, $i$ does not intercommunicate with $s$. Hence $i$ and $s$ are not in the same equivalence class. By Chapman-Kolmogorov, for any $n > n(i)$,
$$p_{ii}(n) = \sum_j p_{ij}(n(i))\,p_{ji}(n - n(i)) = \sum_{j \ne s} p_{ij}(n(i))\,p_{ji}(n - n(i)) \le \sum_{j \ne s} p_{ij}(n(i)) = 1 - p_{is}(n(i)) < 1.$$
Moreover, a positive-probability path from $i$ to $s$ can be started at its last visit to $i$, so with positive probability the chain reaches $s$ without first returning to $i$; since from $s$ it can never return to $i$, the probability of ever returning to $i$ is strictly less than 1, and $i$ is a transient state.
6.3.1 Assume r < 1. The chain jumps to a randomly selected state whenever it hits 0. It then either stays put or moves down, one step at a time, until it hits 0 again. Hence all states intercommunicate, and the chain is persistent. Assuming $a_0 > 0$ the period is 1. The mean recurrence time for 0 is clearly one (for the step from 0 to a state $j$) plus the expected number of steps to move down one level, $1/(1-r)$, times the expected value of the distribution $(a_0, a_1, \dots)$, or
$$\mu_0 = 1 + \frac{1}{1-r}\sum_k k a_k.$$
For states other than 0 the calculation is similar, except that one must separate out excursions to values lower than $i$ (during which $i$ cannot be reached).
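A quick simulation sketch of the formula for $\mu_0$ (my own check, not part of the solution); the jump distribution $a$ and the staying probability $r$ below are arbitrary choices.

```python
# Estimate the mean recurrence time of state 0 and compare with
# mu_0 = 1 + (sum_k k a_k) / (1 - r).  a and r are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([0.1, 0.2, 0.3, 0.4])   # a_0..a_3: jump distribution out of 0
r = 0.3                               # probability of staying put above 0

times = []
for _ in range(100_000):
    j = rng.choice(len(a), p=a)       # one step: 0 -> j
    steps = 1
    while j > 0:
        steps += 1
        if rng.random() >= r:         # move down with probability 1 - r
            j -= 1
    times.append(steps)

theory = 1 + (np.arange(len(a)) @ a) / (1 - r)
print(f"simulated mu_0 = {np.mean(times):.3f}, formula gives {theory:.3f}")
```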
6.3.3 (a) Assume 0 < p < 1/2. Since all states communicate, they are all persistent. Since the diagonal elements are nonzero, the chain is aperiodic. To find the mean recurrence times, solve for the stationary distribution:
$$\pi_0(1 - 2p) + \pi_1 p = \pi_0$$
$$\pi_0\,2p + \pi_1(1 - 2p) + (1 - \pi_0 - \pi_1)\,2p = \pi_1,$$
whence $\pi = (\tfrac14, \tfrac12, \tfrac14)$. Thus, using Theorem 3 on p. 227, the mean recurrence times for states 0 and 2 are each 4, while that for state 1 is 2. I do not find it easy to compute the n-step transition probabilities. Here is one approach: in order to go from 0 to 0 in $n$ steps we can either go from 0 to 0 $n$ times, or we can go from 0 to 1, then go from 1 to 1 in $n - 2$ steps, and then go from 1 to 0. Since we cannot go directly from 0 to 2, we do not need to worry about that route. Thus the diagonal elements of $P^n$ satisfy
$$p_{00}(n) = (1 - 2p)^n + 2p^2\,p_{11}(n - 2)$$
$$p_{11}(n) = 2p^2\,p_{00}(n - 2) + (1 - 2p)^n + 2p^2\,p_{22}(n - 2)$$
$$p_{22}(n) = 2p^2\,p_{11}(n - 2) + (1 - 2p)^n$$
We can solve these equations by substituting the first and third into the second, which then becomes a difference equation (see the handout on the web site; the characteristic polynomial has a pair of complex conjugate roots, which are not covered in the handout). Another approach is to represent $P = B\Lambda B^{-1}$ where $\Lambda$ is diagonal. Then it is easy to compute powers of $P$, as in the sketch below.
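Here is a minimal numerical sketch of that approach; $p = 0.2$ and $n = 8$ are arbitrary illustrative values, and since this $P$ is not symmetric the decomposition uses $B^{-1}$ rather than $B^T$.

```python
# Compute P^n via an eigendecomposition P = B Lambda B^{-1} and compare with
# brute-force matrix powers.  p and n are arbitrary illustrative values.
import numpy as np

p, n = 0.2, 8
P = np.array([[1 - 2*p, 2*p,     0      ],
              [p,       1 - 2*p, p      ],
              [0,       2*p,     1 - 2*p]])

lam, B = np.linalg.eig(P)
Pn = (B * lam**n) @ np.linalg.inv(B)      # P^n = B diag(lam^n) B^{-1}
print(np.allclose(Pn, np.linalg.matrix_power(P, n)))  # True
print(np.sort(lam))                       # eigenvalues 1 - 4p, 1 - 2p, 1
print(Pn[0, 0])                           # p_00(n)
```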
(b) Assume 0 < p < 1/2. Again all states communicate, so the chain is persistent. Since every step moves between $\{0, 2\}$ and $\{1, 3\}$, every return takes an even number of steps, and since you can get from 0 to 0 by going 0 - 1 - 0 (and the same for all the other states), the period is 2. The stationary distribution satisfies the following system of equations:
$$\pi_1 p + \pi_3(1 - p) = \pi_0$$
$$\pi_0(1 - p) + \pi_2 p = \pi_1$$
$$\pi_1(1 - p) + \pi_3 p = \pi_2$$
$$\pi_0 p + \pi_2(1 - p) = \pi_3$$
$$\pi_0 + \pi_1 + \pi_2 + \pi_3 = 1$$
Adding the first and third equations we get $\pi_1 + \pi_3 = \pi_0 + \pi_2$, and using the fifth we see that $\pi_1 + \pi_3 = \pi_0 + \pi_2 = \tfrac12$. Adding the first and second and using $\pi_2 = \tfrac12 - \pi_0$, $\pi_3 = \tfrac12 - \pi_1$ we get $\pi_0 p + \pi_1(1 - p) = \tfrac14$. Finally, adding the first and the fourth and using all the previous relations, we get $\pi_0 = \tfrac14$ and thus $\pi = (\tfrac14, \tfrac14, \tfrac14, \tfrac14)$. Hence the mean recurrence time for each of the four states is 4. The calculation of $p_{ij}(n)$ can be done using matrix multiplication, but there does not seem to be a simple pattern, except that every other power has the pattern of the original matrix (with entries $f(p)$ and $1 - f(p)$, where $f$ is a polynomial of degree $n$ in $p$), and every other power has the nonzero entries replaced by zeros and the zero entries replaced by nonzero entries, as the sketch below illustrates.
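Here is a small sketch illustrating the alternating zero pattern (with the arbitrary choice $p = 0.3$).

```python
# Odd powers of P have the zero pattern of P itself; even powers have the
# complementary pattern.  p = 0.3 is an arbitrary illustrative value.
import numpy as np

p = 0.3
P = np.array([[0,     1 - p, 0,     p    ],
              [p,     0,     1 - p, 0    ],
              [0,     p,     0,     1 - p],
              [1 - p, 0,     p,     0    ]])

for n in (2, 3, 4, 5):
    Pn = np.linalg.matrix_power(P, n)
    print(f"n = {n}, nonzero pattern:\n{(Pn > 1e-12).astype(int)}")
```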
6.4.3 Since $X_n = X_{n-1} - 1 + Y_n$, truncated at 0 and $K$, with $Y_n$ independent of $X_{n-1}$, it is clear that the process is a Markov chain. The state space is $\{0, \dots, K\}$ and $p_{ij} = P(Y_n = j - i + 1)$ for $i > 0$ and $j < K$, with $p_{0j} = P(Y_n = j)$ for $j < K$, $p_{0K} = P(Y_n \ge K)$, and $p_{jK} = P(Y_n \ge K - j + 1)$ for $j \ge 1$. In other words, the transition matrix is
$$P = \begin{pmatrix}
p_0 & p_1 & p_2 & \cdots & p_{K-1} & P_K \\
p_0 & p_1 & p_2 & \cdots & p_{K-1} & P_K \\
0 & p_0 & p_1 & \cdots & p_{K-2} & P_{K-1} \\
0 & 0 & p_0 & \cdots & p_{K-3} & P_{K-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & p_0 & P_1
\end{pmatrix}$$
where $P_l = \sum_{i \ge l} p_i$. Solving for the stationary distribution, we get the first equation
$$\pi_0 p_0 + \pi_1 p_0 = \pi_0,$$
from which we get $\pi_0(1 - p_0) - \pi_1 p_0 = 0$, or $\pi_1 = \pi_0\,\dfrac{1 - p_0}{p_0} = \pi_0\,\dfrac{P_1}{p_0}$. The second equation becomes
$$(\pi_0 + \pi_1)p_1 + \pi_2 p_0 = \pi_1.$$
Using the expressions above we get $\pi_2 = \pi_0\,\dfrac{1 - p_0 - p_1}{p_0^2} = \pi_0\,\dfrac{P_2}{p_0^2}$. Continuing in the same fashion we get $\pi_k = \pi_0\,\dfrac{P_k}{p_0^k}$. Using that the stationary distribution must sum to one, we see that
$$\pi_0 = \frac{1}{\sum_{i=0}^{K} P_i/p_0^i} \quad\text{and}\quad \pi_k = \frac{P_k/p_0^k}{\sum_{i=0}^{K} P_i/p_0^i}.$$
In the geometric case we get $p_0 = p$ and $P_k = \sum_{i \ge k} p(1-p)^i = (1-p)^k$. Hence
$$\pi_k = \frac{(1 - \theta)\,\theta^k}{1 - \theta^{K+1}}, \quad\text{where } \theta = \frac{1-p}{p},$$
which is a truncated geometric distribution (assuming p > 1/2).
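As a numerical check (my own sketch, not part of the solution), one can build the transition matrix for geometric $Y$, solve $\pi P = \pi$, and compare with the truncated geometric formula; the values of $K$ and $p$ are arbitrary.

```python
# Stationary distribution for geometric Y with P(Y = i) = p(1-p)^i, so that
# the tail is P_l = (1-p)^l.  K and p are arbitrary illustrative values.
import numpy as np

K, p = 6, 0.6
theta = (1 - p) / p
pk = p * (1 - p) ** np.arange(K + 1)   # P(Y = i) for i = 0..K
Pk = (1 - p) ** np.arange(K + 2)       # tails P(Y >= l) for l = 0..K+1

P = np.zeros((K + 1, K + 1))
P[0, :K] = pk[:K]                      # p_0j = P(Y = j), j < K
P[0, K] = Pk[K]                        # p_0K = P(Y >= K)
for i in range(1, K + 1):
    for j in range(K):
        if j - i + 1 >= 0:
            P[i, j] = pk[j - i + 1]    # p_ij = P(Y = j - i + 1)
    P[i, K] = Pk[K - i + 1]            # p_iK = P(Y >= K - i + 1)

# stationary distribution: left Perron eigenvector of P
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

formula = theta ** np.arange(K + 1) * (1 - theta) / (1 - theta ** (K + 1))
print(np.allclose(pi, formula))        # True
```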
If one interprets the description of the process differently, the state space becomes
{0,...,K-1}, and p00 = p0 + p1. A similar computation still works, but not quite as neatly.
6.4.8 At each time a surviving particle either dies (with probability $p$) or survives (with probability $1 - p$), regardless of how long it has been in the chamber, and of how many other particles there are (this is another way of saying that the geometric distribution is memoryless). Hence, given that $X_{n-1} = k$, the number of these particles surviving until time $n$ is $\mathrm{Bin}(k, 1-p)$. Independently, a $\mathrm{Po}(\lambda)$ number of particles enters at time $n$. Hence the conditional distribution of $X_n$, given that $X_{n-1} = k$, has pgf $e^{\lambda(s-1)}(p + (1-p)s)^k$. Since $\sum_i \pi_i p_{ij} = \pi_j$, we multiply both sides by $s^j$ and sum over $j$ to get
G (s)   s j j   s j   i pij     i s j pij
j
e
j
 (s 1)
i
i
  ( p  (1  p)s)
i
i
e
j
 (s 1)
G ( p  (1  p)s)
i
Differentiating both sides we get
G (s)  e (s 1)G ( p  (1  p)s)  e (s 1) (1  p)G ( p  (1  p)s)
Letting s  1 we get G (1)    (1 p)G (1) so G (1)   i i 

. This is the expected
p
value of the stationary distribution. It seems reasonable to guess that the stationary
distribution is Poisson (since a binomial sample from a Poisson is Poisson). Attempting

G (s)  e p

e (s 1)e p
(s1)
the right-hand side of the equation above yields
( p(1 p)s 1)

 ep
(s 1)
, the desired result.
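A quick simulation sketch (my own check; $\lambda$ and $p$ are arbitrary) confirming that the chain settles into the Poisson($\lambda/p$) distribution.

```python
# Simulate the chamber and compare the empirical distribution of X_n with
# Poisson(lambda/p).  lam and p are arbitrary illustrative values.
from math import exp, factorial
import numpy as np

rng = np.random.default_rng(2)
lam, p, steps = 2.0, 0.4, 200_000

x, counts = 0, {}
for _ in range(steps):
    x = rng.binomial(x, 1 - p) + rng.poisson(lam)  # survivors + new arrivals
    counts[x] = counts.get(x, 0) + 1

mu = lam / p
mean = sum(k * c for k, c in counts.items()) / steps
print(f"simulated mean = {mean:.3f}, lambda/p = {mu:.3f}")
for k in range(3, 8):
    emp = counts.get(k, 0) / steps
    print(f"P(X={k}): empirical {emp:.4f}, Poisson {exp(-mu)*mu**k/factorial(k):.4f}")
```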