

Denoting by $\{\alpha_i\}$ the probability distribution of the initial state of the original
process $\{X(t)\}$, we have the boundary conditions

$$\alpha(0, 1, j) = \alpha_j, \qquad \alpha(0, 0, j) = 0, \qquad j \in I_0$$

and

$$\alpha(0, 0, j) = \alpha_j, \qquad \alpha(0, 1, j) = 0, \qquad j \in I_f.$$
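For concreteness, these boundary conditions can be coded directly. The sketch below (Python, with illustrative names; a zero-based numbering of the states and index lists for the sets $I_0$ and $I_f$ are assumed) initializes the two arrays from the initial distribution $\{\alpha_i\}$:

import numpy as np

def init_boundary(alpha, I0, If):
    # alpha(0, 1, j) = alpha_j and alpha(0, 0, j) = 0 for j in I_0;
    # alpha(0, 0, j) = alpha_j and alpha(0, 1, j) = 0 for j in I_f.
    n = len(alpha)
    a1 = np.zeros(n)   # holds alpha(0, 1, j)
    a0 = np.zeros(n)   # holds alpha(0, 0, j)
    for j in I0:
        a1[j] = alpha[j]
    for j in If:
        a0[j] = alpha[j]
    return a0, a1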


                Example 4.5.3 (continued) The Hubble telescope problem
                Assume that the telescope is needed to make observations of important astronomical
                events during a period of half a year two years from now. What is the probability
                that during this period of half a year the telescope will be available for at least
                95% of the time when currently all six gyroscopes are in perfect condition? The
                telescope is only working properly when three or more gyroscopes are working.
                In states 1 and 2 the telescope produces blurred observations and in states sleep 2,
                sleep 1 and crash the telescope produces no observations at all. Let us number the
                states sleep 2, sleep 1 and crash as the states 7, 8 and 9. To answer the question
posed, we split the state space $I = \{1, 2, \ldots, 9\}$ into the set $I_0$ of operational states
and the set $I_f$ of failed states with

$$I_0 = \{6, 5, 4, 3\} \quad \text{and} \quad I_f = \{2, 1, 7, 8, 9\}.$$

Before applying the algorithm (4.6.1) with $t = \tfrac{1}{2}$ and $x = 0.95t$, we first use
the standard uniformization method from Section 4.5 to compute the probability
distribution of the state of the telescope two years from now. Writing $\alpha_i = p_{6i}(2)$,
we obtain the values
$$\begin{aligned}
\alpha_1 &= 3.83 \times 10^{-7}, & \alpha_2 &= 0.0001938, & \alpha_3 &= 0.0654032, & \alpha_4 &= 0.2216998,\\
\alpha_5 &= 0.4016008, & \alpha_6 &= 0.3079701, & \alpha_7 &= 0.0030271, & \alpha_8 &= 0.0000998,\\
\alpha_9 &= 0.0000050
\end{aligned}$$

for the data $\lambda = 0.1$, $\mu = 100$ and $\eta = 5$. Next the algorithm (4.6.1) leads to the
                value 0.9065 for the probability that the telescope will be properly working for at
                least 95% of the time in the half-year that comes two years from now.
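
The uniformization step can be sketched as follows. The routine below computes the transient probabilities $p_{ij}(t)$ for any generator matrix $Q$ from the standard series $P(t) = \sum_{n=0}^{\infty} e^{-\nu t} (\nu t)^n/n! \; \overline{P}^{\,n}$ with $\overline{P} = I + Q/\nu$; the generator of the telescope model itself (built from $\lambda$, $\mu$ and $\eta$ as in Example 4.5.3) is assumed to be given and is not reproduced here.

import numpy as np

def transient_probs(Q, t, start, eps=1e-12):
    # Transient distribution of a continuous-time Markov chain by
    # uniformization: Q is the generator (rows sum to zero), t the time
    # horizon, start the initial state, eps the truncation tolerance.
    n = Q.shape[0]
    nu = (-np.diag(Q)).max()          # uniformization rate
    P_bar = np.eye(n) + Q / nu        # uniformized transition matrix
    probs = np.zeros(n)
    probs[start] = 1.0
    result = np.zeros(n)
    weight = np.exp(-nu * t)          # Poisson weight for n = 0
    cum = weight
    k = 0
    while cum < 1.0 - eps:
        result += weight * probs
        probs = probs @ P_bar         # advance one uniformized step
        k += 1
        weight *= nu * t / k          # next Poisson weight
        cum += weight
    return result + weight * probs

With the 9-state generator of the telescope model this would give $\alpha_i = p_{6i}(2)$ as transient_probs(Q, 2.0, start=5), state 6 carrying index 5 in the zero-based numbering.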


                4.6.2 Transient Reward Distribution for the General Case
In the general case the continuous-time Markov chain $\{X(t)\}$ earns a reward at rate
$r(j)$ for each unit of time the process is in state $j$ and earns a lump reward of $F_{jk}$
each time the process makes a state transition from state $j$ to another state $k$. It
is assumed that the $r(j)$ and the $F_{jk}$ are both non-negative.
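To make this reward structure concrete, the following Monte Carlo sketch generates one sample of the cumulative reward over $[0, t]$; it only illustrates how the rate rewards $r(j)$ and the lump rewards $F_{jk}$ accumulate along a sample path and is not the generalized algorithm discussed next (the function name and the sampling set-up are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)

def sample_reward(Q, r, F, t, start):
    # One sample path of the cumulative reward over [0, t]: Q is the
    # generator, r the vector of reward rates r(j), F the matrix of
    # lump rewards F[j, k], start the initial state.
    state, clock, reward = start, 0.0, 0.0
    while True:
        rate = -Q[state, state]
        if rate == 0.0:                     # absorbing state
            return reward + r[state] * (t - clock)
        stay = rng.exponential(1.0 / rate)  # exponential sojourn time
        if clock + stay >= t:               # horizon reached before a jump
            return reward + r[state] * (t - clock)
        reward += r[state] * stay           # rate reward during the sojourn
        clock += stay
        p = Q[state].copy()
        p[state] = 0.0
        p /= rate                           # jump probabilities Q(j,k)/rate
        state_new = rng.choice(len(p), p=p)
        reward += F[state, state_new]       # lump reward on the transition
        state = state_new

The probability that the cumulative reward exceeds a given level $x$ can then be estimated from many independent replications.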
It is possible to extend the algorithm from Section 4.6.1 to the general case. However, the generalized
algorithm is very complicated and, worse, it is not numerically stable. For this