with the shape parameter α > 0 and scale parameter λ > 0. The corresponding
probability distribution function F(t) is given by
F(t) = 1 − exp[−(λt)^α],   t ≥ 0.
The mean and the squared coefficient of variation of the Weibull random variable
X are
E(X) = (1/λ) Γ(1 + 1/α)   and   c_X² = Γ(1 + 2/α) / [Γ(1 + 1/α)]² − 1.
A unique Weibull distribution can be fitted to each positive random variable with
given first two moments. For that purpose a non-linear equation in α must be
numerically solved. The Weibull density is always unimodal with a maximum at
t = λ⁻¹(1 − 1/α)^{1/α} if c_X² < 1 (α > 1), and at t = 0 if c_X² ≥ 1 (α ≤ 1). The
failure rate function is increasing from 0 to infinity if c_X² < 1 and is decreasing
from infinity to zero if c_X² > 1.
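As an illustration of the moment-fitting procedure mentioned above, the following Python sketch solves the non-linear equation Γ(1 + 2/α)/[Γ(1 + 1/α)]² − 1 = c_X² numerically for α and then recovers λ from E(X). This is only a minimal sketch, not the book's code: the name fit_weibull is ours, SciPy's brentq root finder is assumed to be available, and the search bracket for α is chosen for moderate values of c_X².

import math
from scipy.optimize import brentq

def fit_weibull(mean, scv):
    """Fit the Weibull shape alpha and scale lam to a prescribed mean E(X) and
    squared coefficient of variation scv = c_X^2, using the moment relations
    E(X) = Gamma(1 + 1/alpha)/lam and
    c_X^2 = Gamma(1 + 2/alpha)/[Gamma(1 + 1/alpha)]^2 - 1."""
    def scv_of(alpha):
        g1 = math.gamma(1.0 + 1.0 / alpha)
        g2 = math.gamma(1.0 + 2.0 / alpha)
        return g2 / g1 ** 2 - 1.0

    # c_X^2 is strictly decreasing in alpha and equals 1 at alpha = 1,
    # so scv <= 1 forces alpha >= 1 and scv > 1 forces alpha < 1.
    lo, hi = (1.0, 50.0) if scv <= 1.0 else (0.05, 1.0)
    alpha = brentq(lambda a: scv_of(a) - scv, lo, hi)
    lam = math.gamma(1.0 + 1.0 / alpha) / mean
    return alpha, lam

For example, fit_weibull(1.0, 0.25), the setting used in Figure B.1 below, returns a shape α of roughly 2.1 and a scale λ of about 0.89.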
The gamma and Weibull densities are similar in shape, and for c_X² < 1 the lognormal
density takes on shapes similar to the gamma and Weibull densities. For
the case c_X² ≥ 1 the gamma and Weibull densities have their maximum value at
t = 0; most outcomes tend to be small and very large outcomes occur only occa-
sionally. The lognormal density goes to zero as t → 0 faster than any power of
t, and thus the lognormal distribution will typically produce fewer small outcomes
than the other two distributions. This explains the popular use of the lognormal
distribution in actuarial studies. The differences between the gamma, Weibull and
lognormal densities become most significant in their tail behaviour. The densities
for large t go down like exp[−λt], exp[−(λt)^α] and exp[−(1/2)[ln(t) − λ]²/α²].
Thus, for given values of the mean and the coefficient of variation, the lognormal
density always has the longest tail. The gamma density has the second longest
tail only if α > 1; that is, only if its coefficient of variation is less than one.
In Figure B.1 we illustrate these facts by drawing the gamma, Weibull and log-
normal densities for c_X² = 0.25, where E(X) is taken to be 1. To conclude this
appendix, we discuss several useful generalizations of exponential and Erlangian
distributions. In many queueing and inventory applications there is a very substan-
tial (numerical) advantage in using the generalized distributions rather than other
distributions.
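To make the tail comparison and the Figure B.1 setting concrete, here is a small Python sketch of our own (not part of the text) that fits a gamma, a Weibull and a lognormal distribution to E(X) = 1 and c_X² = 0.25 via the standard moment relations and prints the tail probabilities P(X > t) for a few values of t. It assumes SciPy's stats distributions and brentq root finder; the moment matching for the gamma and lognormal cases uses the usual closed-form relations rather than anything specific to this appendix.

import math
from scipy.optimize import brentq
from scipy.stats import gamma, weibull_min, lognorm

mean, scv = 1.0, 0.25                  # Figure B.1 setting: E(X) = 1, c_X^2 = 0.25

# Gamma: shape 1/scv and scale mean*scv reproduce the first two moments.
gam = gamma(a=1.0 / scv, scale=mean * scv)

# Weibull: solve Gamma(1 + 2/a)/[Gamma(1 + 1/a)]^2 - 1 = scv for the shape a
# (scv < 1 corresponds to a > 1), then match the mean through the scale.
f = lambda a: math.gamma(1 + 2 / a) / math.gamma(1 + 1 / a) ** 2 - 1 - scv
shape = brentq(f, 1.0, 50.0)
wei = weibull_min(c=shape, scale=mean / math.gamma(1 + 1 / shape))

# Lognormal: sigma^2 = ln(1 + scv) and mu = ln(mean) - sigma^2/2 match the moments.
sigma = math.sqrt(math.log(1.0 + scv))
logn = lognorm(s=sigma, scale=mean * math.exp(-0.5 * sigma ** 2))

# Tail probabilities P(X > t): the lognormal tail dominates for large t, and
# for c_X^2 < 1 the gamma tail in turn dominates the Weibull tail.
for t in (2.0, 4.0, 8.0):
    print(f"t = {t}: gamma {gam.sf(t):.3e}  Weibull {wei.sf(t):.3e}  lognormal {logn.sf(t):.3e}")

Run under these assumptions, the printed tail probabilities decrease in the order lognormal, gamma, Weibull, in line with the discussion above.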
Generalized Erlangian distributions
An Erlang-k (E_k) distributed random variable can be represented as the sum of
k independent exponentially distributed random variables with the same mean. A
generalized Erlangian distribution is one built out of a random sum of exponen-
tially distributed components. A particularly convenient distribution arises when
these components have the same mean. In fact, such a distribution can be used
to approximate arbitrarily closely any distribution having its mass on the positive