
15. Diffusion Processes
$$
\begin{aligned}
&= \int_{-\infty}^{\infty} \Bigl[\, z\, f(t,x)\,\phi_s(x,z) \;-\; \frac{1}{2}\,\frac{\partial}{\partial x}\bigl( z^2 f(t,x)\,\phi_s(x,z) \bigr) \Bigr]\, dz \\
&\approx \mu(t,x)\,f(t,x)\,s \;-\; \frac{1}{2}\,\frac{\partial}{\partial x}\Bigl[ \int_{-\infty}^{\infty} z^2 \phi_s(x,z)\, dz \; f(t,x) \Bigr] \\
&\approx \mu(t,x)\,f(t,x)\,s \;-\; \frac{1}{2}\,\frac{\partial}{\partial x}\bigl[ \sigma^2(t,x)\,f(t,x) \bigr]\, s.
\end{aligned}
$$
                              Using equation (15.3), one can show that these approximations are good
                              to order o(s). Dividing by s and sending s to 0 give the flux
$$
-\frac{\partial}{\partial t}\Pr(X_t \le x) \;=\; \mu(t,x)\,f(t,x) \;-\; \frac{1}{2}\,\frac{\partial}{\partial x}\bigl[\sigma^2(t,x)\,f(t,x)\bigr].
$$
                              A final differentiation with respect to x now produces the Kolmogorov
                              forward equation
$$
\frac{\partial}{\partial t} f(t,x) \;=\; -\frac{\partial}{\partial x}\bigl[\mu(t,x)\,f(t,x)\bigr] \;+\; \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[\sigma^2(t,x)\,f(t,x)\bigr]. \tag{15.5}
$$
As t tends to 0, the density f(t, x) concentrates all of its mass around the initial point x_0.
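To see the forward equation in action, here is a minimal numerical sketch (not from the text): it propagates a density under equation (15.5) by explicit finite differences. The drift and diffusion functions, the grid, the time step, and the narrow initial density standing in for a point mass at x_0 = 0 are all illustrative assumptions.

```python
# Minimal finite-difference sketch (illustrative, not from the text) of the
# Kolmogorov forward equation (15.5):
#   df/dt = -d/dx[mu(t,x) f] + (1/2) d^2/dx^2[sigma^2(t,x) f].
import numpy as np

def evolve_density(mu, sigma2, x, f0, t_final, dt):
    """Explicit Euler in time, central differences in x."""
    dx = x[1] - x[0]
    f = f0.copy()
    for n in range(int(t_final / dt)):
        t = n * dt
        drift = mu(t, x) * f                 # mu(t, x) f(t, x)
        diff = sigma2(t, x) * f              # sigma^2(t, x) f(t, x)
        d_drift = (np.roll(drift, -1) - np.roll(drift, 1)) / (2 * dx)
        d2_diff = (np.roll(diff, -1) - 2 * diff + np.roll(diff, 1)) / dx**2
        f = f + dt * (-d_drift + 0.5 * d2_diff)
        f[0] = f[-1] = 0.0                   # keep the far boundaries at zero
    return f

# Standard Brownian motion: mu = 0, sigma^2 = 1, started in a narrow Gaussian.
x = np.linspace(-10.0, 10.0, 801)
dx = x[1] - x[0]
f0 = np.exp(-x**2 / (2 * 0.01)) / np.sqrt(2 * np.pi * 0.01)
f1 = evolve_density(lambda t, x: 0.0 * x,
                    lambda t, x: 1.0 + 0.0 * x,
                    x, f0, t_final=1.0, dt=1e-4)

print(np.sum(f1) * dx)                       # total mass stays near 1
```

With these choices the computed density at t = 1 should be close to the N(0, 1.01) density, in agreement with the exact Gaussian solution of Example 15.2.1 below.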
                              Example 15.2.1 Standard Brownian Motion
If µ(t, x) = 0 and σ²(t, x) = 1, then the forward equation becomes
$$
\frac{\partial}{\partial t} f(t,x) \;=\; \frac{1}{2}\,\frac{\partial^2}{\partial x^2} f(t,x).
$$
At X_0 = 0 one can check the solution
$$
f(t,x) \;=\; \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{x^2}{2t}}
$$
by straightforward differentiation. Thus, X_t has a Gaussian density with mean 0 and variance t. Here it is clear that X_t becomes progressively more concentrated around its starting point as t tends to 0.
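As a complement to the differentiation, here is a small symbolic check (a sketch using sympy, not part of the text) that this Gaussian density indeed satisfies the forward equation for standard Brownian motion.

```python
# Symbolic check (sympy; illustrative, not from the text) that
# f(t, x) = exp(-x^2/(2t)) / sqrt(2*pi*t) solves f_t = (1/2) f_xx.
import sympy as sp

t = sp.symbols('t', positive=True)
x = sp.symbols('x', real=True)

f = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)

lhs = sp.diff(f, t)            # time derivative of f
rhs = sp.diff(f, x, 2) / 2     # half the second space derivative of f

print(sp.simplify(lhs - rhs))  # prints 0
```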

                              Example 15.2.2 Transformations of Standard Brownian Motion

The transformed Brownian process Y_t = σX_t + αt + x_0 has infinitesimal mean and variance µ_Y(t, x) = α and σ²_Y(t, x) = σ². It is clear that Y_t is normally distributed with mean αt + x_0 and variance σ²t. The further transformation Z_t = e^{Y_t} leads to a process with infinitesimal mean and variance µ_Z(t, z) = zα + (1/2)zσ² and σ²_Z(t, z) = z²σ². Because Y_t is normally distributed, Z_t is lognormally distributed.
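As a check on these infinitesimal parameters, here is a short worked calculation. It is a sketch that assumes the standard second-order (Itô-type) transformation rule for a smooth function g of a diffusion, namely µ_Z = g'(y)µ_Y + (1/2)g''(y)σ²_Y and σ²_Z = [g'(y)]²σ²_Y; the rule itself is assumed here rather than quoted from the text. With g(y) = e^y and z = e^y it reproduces the values quoted above.

```latex
% Sketch: infinitesimal parameters of Z_t = g(Y_t) with g(y) = e^y, z = e^y,
% via the assumed second-order transformation rule stated in the lead-in.
\begin{align*}
  \mu_Z(t,z)      &= e^{y}\,\alpha + \tfrac{1}{2}\, e^{y}\,\sigma^2
                   \;=\; z\alpha + \tfrac{1}{2}\, z\sigma^2, \\[2pt]
  \sigma_Z^2(t,z) &= \bigl(e^{y}\bigr)^2 \sigma^2
                   \;=\; z^2\sigma^2 .
\end{align*}
```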