3.2 Transient Analysis 87
3.2.1 Absorbing States 89
3.2.2 Mean First-Passage Times 92
3.2.3 Transient and Recurrent States 93
3.3 The Equilibrium Probabilities 96
3.3.1 Preliminaries 96
3.3.2 The Equilibrium Equations 98
3.3.3 The Long-run Average Reward per Time Unit 103
3.4 Computation of the Equilibrium Probabilities 106
3.4.1 Methods for a Finite-State Markov Chain 107
3.4.2 Geometric Tail Approach for an Infinite State Space 111
3.4.3 Metropolis-Hastings Algorithm 116
3.5 Theoretical Considerations 119
3.5.1 State Classification 119
3.5.2 Ergodic Theorems 126
Exercises 134
Bibliographic Notes 139
References 139
4 Continuous-Time Markov Chains 141
4.0 Introduction 141
4.1 The Model 142
4.2 The Flow Rate Equation Method 147
4.3 Ergodic Theorems 154
4.4 Markov Processes on a Semi-Infinite Strip 157
4.5 Transient State Probabilities 162
4.5.1 The Method of Linear Differential Equations 163
4.5.2 The Uniformization Method 166
4.5.3 First Passage Time Probabilities 170
4.6 Transient Distribution of Cumulative Rewards 172
4.6.1 Transient Distribution of Cumulative Sojourn Times 173
4.6.2 Transient Reward Distribution for the General Case 176
Exercises 179
Bibliographic Notes 185
References 185
5 Markov Chains and Queues 187
5.0 Introduction 187
5.1 The Erlang Delay Model 187
5.1.1 The M/M/1 Queue 188
5.1.2 The M/M/c Queue 190
5.1.3 The Output Process and Time Reversibility 192
5.2 Loss Models 194
5.2.1 The Erlang Loss Model 194
5.2.2 The Engset Model 196
5.3 Service-System Design 198
5.4 Insensitivity 202
5.4.1 A Closed Two-node Network with Blocking 203
5.4.2 The M/G/1 Queue with Processor Sharing 208
5.5 A Phase Method 209