the number in system from M + 1 to M. If all of the s channels are busy, then
the time until the next event (service completion or new arrival) is exponentially
distributed with mean 1/(λ + sµ) and the next event is generated by an arrival
with probability λ/(λ + sµ). Putting the pieces together, we find
\[
\tau_{(M,t)}(s) = \frac{1}{\lambda + s\mu} + \frac{\lambda}{\lambda + s\mu}\,\frac{1}{s\mu - \lambda} = \frac{s\mu}{(\lambda + s\mu)(s\mu - \lambda)}
\]
and
\[
c_{(M,t)}(s) = K(t,s) + \frac{hM + rs}{\lambda + s\mu} + \frac{\lambda}{\lambda + s\mu}\left\{\frac{h(M+1) + rs}{s\mu - \lambda} + \frac{h\lambda}{(s\mu - \lambda)^{2}}\right\}.
\]
Also, by the last argument above,
\[
p_{(M,t)(M-1,s)}(s) = \frac{s\mu}{\lambda + s\mu} \qquad\text{and}\qquad p_{(M,t)(M,s)}(s) = \frac{\lambda}{\lambda + s\mu}.
\]
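These one-step quantities translate directly into a small computational routine. The following Python sketch merely transcribes the formulas above; the function name, the argument order and the switching-cost callable K(t, a) are illustrative choices, not notation from the model.

```python
# Minimal sketch: one-step elements for a boundary state (M, t) under the
# (only feasible) action a = s, transcribing the formulas above.
# Assumes lam < s*mu, as required for the excursion above M to be finite.

def boundary_elements(lam, mu, s, M, t, h, r, K):
    """Return tau_(M,t)(s), c_(M,t)(s) and the two transition probabilities."""
    tau = s * mu / ((lam + s * mu) * (s * mu - lam))   # expected transition time
    cost = (K(t, s)
            + (h * M + r * s) / (lam + s * mu)
            + (lam / (lam + s * mu))
              * ((h * (M + 1) + r * s) / (s * mu - lam)
                 + h * lam / (s * mu - lam) ** 2))     # expected one-step cost
    p_down = s * mu / (lam + s * mu)                   # to state (M - 1, s)
    p_stay = lam / (lam + s * mu)                      # to state (M, s)
    return tau, cost, p_down, p_stay
```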
For the other states of the embedded state space I, the basic elements of the
semi-Markov decision model are easily specified. We have
\[
\tau_{(i,t)}(a) = \frac{1}{\lambda + \min(i,a)\mu}, \qquad 0 \le i \le M-1,\ 0 \le a \le s,
\]
and
\[
c_{(i,t)}(a) = K(t,a) + \frac{hi + ra}{\lambda + \min(i,a)\mu}, \qquad 0 \le i \le M-1,\ 0 \le a \le s.
\]
The one-step transition probabilities are left to the reader. Next we formulate the
value-iteration algorithm. In the data transformation we take τ = 1/(λ+sµ). Then
the recurrence relation (7.2.3) becomes
\[
\begin{aligned}
V_n((i,t)) = \min_{0 \le a \le s}\Bigl[\, &\{\lambda + \min(i,a)\mu\}\,K(t,a) + hi + ra \\
&+ \frac{\lambda}{\lambda + s\mu}\,V_{n-1}((i+1,a)) + \frac{\min(i,a)\mu}{\lambda + s\mu}\,V_{n-1}((i-1,a)) \\
&+ \Bigl(1 - \frac{\lambda + \min(i,a)\mu}{\lambda + s\mu}\Bigr)V_{n-1}((i,t)) \Bigr]
\end{aligned}
\]
for the states (i, t) with 0 ≤ i ≤ M − 1, 0 ≤ t ≤ s. For the states (M, t),
\[
\begin{aligned}
V_n((M,t)) = {}&\frac{(\lambda + s\mu)(s\mu - \lambda)}{s\mu}\,K(t,s) + \frac{h\lambda}{s\mu - \lambda} + hM + rs \\
&+ \frac{s\mu - \lambda}{\lambda + s\mu}\,V_{n-1}((M-1,s)) + \frac{\lambda(s\mu - \lambda)}{s\mu(\lambda + s\mu)}\,V_{n-1}((M,s)) \\
&+ \Bigl(1 - \frac{s\mu - \lambda}{s\mu}\Bigr)V_{n-1}((M,t))
\end{aligned}
\]
with the convention $V_{n-1}((-1,t)) = 0$.
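The two recurrences can be coded almost verbatim. The following Python sketch performs a single value-iteration sweep $V_{n-1} \to V_n$ over all states $(i,t)$; the array layout, the loop organization and the switching-cost callable K(t, a) are illustrative assumptions, and the stopping test of the value-iteration algorithm is omitted.

```python
import numpy as np

# Minimal sketch of one value-iteration sweep after the data transformation
# tau = 1/(lam + s*mu). V_prev and the returned V are (M+1) x (s+1) arrays
# indexed as V[i, t]. Assumes lam < s*mu.

def value_iteration_step(V_prev, lam, mu, s, M, h, r, K):
    """One sweep V_{n-1} -> V_n over the states (i, t), 0 <= i <= M, 0 <= t <= s."""
    V = np.empty_like(V_prev)
    rate_max = lam + s * mu
    for t in range(s + 1):
        # Interior states (i, t), 0 <= i <= M - 1: minimize over a = 0, ..., s.
        for i in range(M):
            best = np.inf
            for a in range(s + 1):
                rate = lam + min(i, a) * mu
                up = V_prev[i + 1, a]
                down = V_prev[i - 1, a] if i >= 1 else 0.0  # V((-1, a)) := 0;
                                                            # its coefficient is 0 when i = 0
                val = (rate * K(t, a) + h * i + r * a
                       + lam / rate_max * up
                       + min(i, a) * mu / rate_max * down
                       + (1.0 - rate / rate_max) * V_prev[i, t])
                best = min(best, val)
            V[i, t] = best
        # Boundary states (M, t): the only feasible action is a = s.
        V[M, t] = (rate_max * (s * mu - lam) / (s * mu) * K(t, s)
                   + h * lam / (s * mu - lam) + h * M + r * s
                   + (s * mu - lam) / rate_max * V_prev[M - 1, s]
                   + lam * (s * mu - lam) / (s * mu * rate_max) * V_prev[M, s]
                   + (1.0 - (s * mu - lam) / (s * mu)) * V_prev[M, t])
    return V
```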