the piece by a new one and letting the piece operate for the present day. A preventive
replacement takes one day. A new piece has working condition i = 1. A piece whose
present working condition is i has working condition j the next day with known probability
q_ij, where q_ij = 0 for j < i. The following replacement rule is used. The current piece is
only replaced by a new one when its working condition is greater than the critical value m,
where m is a given integer with 1 ≤ m < N.
(a) Define an appropriate Markov chain and specify its one-step transition probabilities.
(b) Explain how to calculate the long-run fraction of days the equipment is inoperative
and the fraction of replacements occurring in the failure state N.
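As a rough aid for checking an answer, here is a minimal numerical sketch in Python. It assumes one particular choice of chain (the working condition i observed at the beginning of a day, with every state i > m serving as a replacement day), uses a hypothetical deterioration matrix q_ij, and takes the fraction of inoperative days to be the total stationary probability of the states i > m and the fraction of replacements made in the failure state N to be the corresponding ratio; it is a sketch under these assumptions, not the intended solution.

```python
import numpy as np

def replacement_chain(q, m):
    """One possible chain for Exercise 3.14: the state is the working
    condition i = 1, ..., N observed at the beginning of a day (index i-1).
    For i <= m the piece is kept and deteriorates according to q_ij;
    for i > m the day is a replacement day and the next day starts with
    a new piece in condition 1."""
    N = q.shape[0]
    P = np.zeros((N, N))
    for i in range(N):
        if i + 1 <= m:
            P[i, :] = q[i, :]
        else:
            P[i, 0] = 1.0
    return P

def stationary(P):
    """Solve pi P = pi with sum(pi) = 1 by replacing one balance
    equation with the normalization equation."""
    N = P.shape[0]
    A = np.vstack([(P.T - np.eye(N))[:-1], np.ones(N)])
    b = np.zeros(N); b[-1] = 1.0
    return np.linalg.solve(A, b)

# hypothetical deterioration matrix with q_ij = 0 for j < i, N = 5, m = 3
rng = np.random.default_rng(seed=3)
N, m = 5, 3
q = np.triu(rng.dirichlet(np.ones(N), size=N))
q /= q.sum(axis=1, keepdims=True)

pi = stationary(replacement_chain(q, m))
frac_inoperative = pi[m:].sum()               # days spent on a replacement
frac_failure_repl = pi[N - 1] / pi[m:].sum()  # replacements made in state N
print(frac_inoperative, frac_failure_repl)
```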
3.15 Consider a stochastically failing piece of equipment with two identical components
that operate independently of each other. The lifetime in days of each component has a
discrete probability distribution {p_j, j = 1, ..., M}. A component in the failure state at the
beginning of a day is replaced instantaneously. It may be economical to preventively replace
the other working component at the same time the failed component has to be replaced. The
cost of replacing only one component is K_1, while the cost of replacing both components
simultaneously is K_2, where 0 < K_2 < 2K_1. The control rule is as follows. Replace
a component upon failure or upon reaching the age of R days, whichever occurs first. If
a component is replaced and the other component is still working, the other component is
preventively replaced when it has been in use for r or more days. The parameters r and R
are given integers with 1 ≤ r < R.
(a) Define an appropriate Markov chain and specify its one-step transition probabilities.
(b) How can you calculate the long-run average cost per day?
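An exact answer requires a chain that records the ages of both components, which is tedious to write out here. The sketch below instead simulates the (r, R) rule so that an analytical answer to (b) can be sanity-checked; the timing conventions (ages inspected at the beginning of a day, a component counted as failed once its age reaches its hidden lifetime) and all numerical data (p_j, K_1, K_2, r, R) are illustrative assumptions, not part of the exercise.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_cost(p, r, R, K1, K2, days=10**6):
    """Monte Carlo sketch of the long-run average cost per day under the
    (r, R) rule of Exercise 3.15.  p[j-1] is the probability that a
    component lives exactly j days (j = 1, ..., M)."""
    M = len(p)
    draw = lambda n: rng.choice(np.arange(1, M + 1), size=n, p=p)
    age = np.zeros(2, dtype=int)   # days each component has been in use
    life = draw(2)                 # hidden lifetimes of the two components
    total_cost = 0.0
    for _ in range(days):
        # components that must be replaced: failed or age limit R reached
        must = (age >= life) | (age >= R)
        if must.any():
            # replace the other component as well if its age is r or more
            both = must.all() or (age[~must] >= r).any()
            total_cost += K2 if both else K1
            renew = np.array([True, True]) if both else must
            age[renew] = 0
            life[renew] = draw(renew.sum())
        age += 1                   # the day passes
    return total_cost / days

# hypothetical lifetime distribution and cost parameters
p = np.array([0.1, 0.2, 0.3, 0.4])          # lifetimes of 1, ..., 4 days
print(simulate_cost(p, r=2, R=3, K1=100.0, K2=160.0, days=200000))
```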
3.16 A transmission channel transmits messages one at a time, and transmission of a message
can only start at the beginning of a time slot. The time slots have unit length and the
transmission time of a message is one time slot. However, each transmission can fail with
some probability f . A failed transmission is tried again at the beginning of the next time
slot. The numbers of new messages arriving during the time slots are independent random
variables with a common discrete distribution {a_k, k = 0, 1, ...}. Newly arriving messages
are temporarily stored in a buffer of ample capacity. It is assumed that the average arrival
rate of new messages is smaller than the average number of messages that the channel can
successfully transmit per time slot, that is, ∑_k k a_k < 1 − f. The goal is to find the
long-run average throughput per time unit.
(a) Define an appropriate Markov chain with a one-dimensional state space and specify
its one-step transition probabilities.
(b) Can you give a recursive algorithm for the computation of the state probabilities?
Express the average throughput in terms of the state probabilities.
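One convenient choice of chain for (a), assumed in the sketch below, is X_n = the number of messages in the buffer at the beginning of slot n: from state 0 it moves to k with probability a_k, and from state i ≥ 1 it moves to i − 1 + k with probability (1 − f)a_k and to i + k with probability f a_k. Because the state can decrease by at most one per slot, balancing the up- and downcrossings of each level gives a stable recursion for the p_j, with p_0 fixed by the throughput relation (1 − p_0)(1 − f) = ∑_k k a_k. The arrival distribution below is a hypothetical truncated Poisson distribution, and the scheme requires a_0 > 0; treat it as a sketch under these assumptions.

```python
import numpy as np
from math import exp, factorial

def state_probs(a, f, jmax=200):
    """Recursive computation of the stationary probabilities p_j for the
    assumed chain of Exercise 3.16: from state 0 the chain moves to k with
    probability a_k; from state i >= 1 it moves to i-1+k with probability
    (1-f)*a_k (successful attempt) and to i+k with probability f*a_k
    (failed attempt).  Balancing the flows across the level between j and
    j+1 gives p_{j+1} in terms of p_0, ..., p_j.  Requires a[0] > 0."""
    a = np.asarray(a, dtype=float)
    lam = np.dot(np.arange(len(a)), a)        # average arrival rate
    assert lam < 1.0 - f, "stability condition sum_k k*a_k < 1 - f violated"
    A = lambda n: a[n:].sum()                 # tail probability P(arrivals >= n)
    p = np.zeros(jmax + 1)
    p[0] = 1.0 - lam / (1.0 - f)              # from (1 - p_0)(1 - f) = lam
    for j in range(jmax):
        up = p[0] * A(j + 1)                  # upcrossings from state 0
        up += sum(p[i] * ((1 - f) * A(j - i + 2) + f * A(j - i + 1))
                  for i in range(1, j + 1))   # upcrossings from states 1..j
        p[j + 1] = up / ((1 - f) * a[0])      # downcrossings = upcrossings
    return p

# hypothetical arrival distribution: Poisson with mean 0.5, truncated
mean_arrivals, f = 0.5, 0.2
a = np.array([exp(-mean_arrivals) * mean_arrivals**k / factorial(k)
              for k in range(25)])
a /= a.sum()
p = state_probs(a, f)
print("throughput:", (1 - p[0]) * (1 - f))    # should equal sum_k k*a_k
```

The last line illustrates one way to express the average throughput in the state probabilities: under the assumed chain it equals (1 − p_0)(1 − f).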
3.17 Messages arrive at a transmission channel according to a Poisson process with rate λ.
The channel can transmit only one message at a time and a new transmission can only start
at the beginnings of the time slots t = 1, 2, . . . . The transmission time of a message is one
time slot. The following access-control rule is used. The gate is closed for newly arriving
messages when the number of messages awaiting transmission has reached the level R and
is opened again when the number of messages awaiting transmission has dropped to the
level r, where the parameters r and R are given integers with 0 ≤ r < R. The goal is to
study the long-run fraction of lost messages as a function of r and R.
(a) Define an appropriate Markov chain and specify its one-step transition probabilities.
(b) Show how to calculate the long-run fraction of lost messages.
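Because of the hysteresis between r and R, a natural state for (a) is the pair (number of messages awaiting transmission, gate status). The simulation sketch below is only meant for checking an answer to (b); the intra-slot timing (arrivals handled one at a time, gate reopening tested at slot boundaries) and the parameter values are assumptions made here, not part of the exercise.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def loss_fraction(lam, r, R, slots=10**6):
    """Monte Carlo sketch for Exercise 3.17: fraction of lost messages under
    the (r, R) hysteresis rule.  Assumed timing: the message at the head of
    the queue (if any) is transmitted during the slot and leaves at its end;
    each Poisson arrival during the slot is admitted only if the gate is open
    at that moment; the gate closes as soon as the queue reaches R and
    reopens at a slot boundary once the queue has dropped to r or below."""
    queue, gate_open = 0, True
    arrived = lost = 0
    for _ in range(slots):
        departing = 1 if queue >= 1 else 0   # transmission in this slot
        k = rng.poisson(lam)                 # arrivals during the slot
        arrived += k
        for _ in range(k):
            if gate_open:
                queue += 1
                if queue >= R:               # gate closes at level R
                    gate_open = False
            else:
                lost += 1                    # arrival finds the gate closed
        queue -= departing                   # transmission completes
        if not gate_open and queue <= r:     # gate reopens at level r
            gate_open = True
    return lost / arrived

print(loss_fraction(lam=0.8, r=2, R=5, slots=200000))
```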
3.18 In Example 3.5.1 we have determined for the GI/M/1 queue the customer-average
probability π_j denoting the long-run fraction of customers who find j other customers
present upon arrival. Denote by the time-average probability p_j the long-run fraction of
time that j customers are present for j = 0, 1, . . . . Use Theorem 3.3.3 and Lemma 1.1.4