
8. The Polygenic Model
(a) If $B = (b_{ij})$ is a square matrix with cofactor $B_{ij}$ corresponding to entry $b_{ij}$, then the determinant $\det B = \sum_j b_{ij} B_{ij}$ is expandable along any row $i$. If $B$ is invertible as well, then its inverse $C = B^{-1}$ has entries $c_{ij} = \frac{1}{\det B} B_{ji}$.
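For instance, take the $2 \times 2$ example $B = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, whose cofactors are $B_{11} = 4$, $B_{12} = -3$, $B_{21} = -2$, and $B_{22} = 1$. Expansion along row 1 gives $\det B = 1 \cdot 4 + 2 \cdot (-3) = -2$, and expansion along row 2 gives the same value $3 \cdot (-2) + 4 \cdot 1 = -2$. The inverse formula then produces
\[
B^{-1} = \frac{1}{-2} \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}
       = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}.
\]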
(b) If $B = (b_{ij})$ is a square matrix, then the trace $\operatorname{tr}(B)$ of $B$ is defined by $\operatorname{tr}(B) = \sum_i b_{ii}$. The trace function satisfies $\operatorname{tr}(BC) = \operatorname{tr}(CB)$ for any two conforming matrices $B$ and $C$.
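For instance, the non-square pair
\[
B = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad
C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix}
\]
gives the $2 \times 2$ product $BC = \begin{pmatrix} 4 & 5 \\ 10 & 11 \end{pmatrix}$ and the $3 \times 3$ product $CB = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 5 & 7 & 9 \end{pmatrix}$; although the two products differ in dimension, $\operatorname{tr}(BC) = 4 + 11 = 15 = 1 + 5 + 9 = \operatorname{tr}(CB)$.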
(c) The matrix transpose operation satisfies $(BC)^t = C^t B^t$.
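For instance, $B = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $C = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$ yield $BC = \begin{pmatrix} 2 & 3 \\ 4 & 7 \end{pmatrix}$, and indeed $(BC)^t = \begin{pmatrix} 2 & 4 \\ 3 & 7 \end{pmatrix} = C^t B^t$. Iterating the rule reverses the order of any finite product: $(B_1 B_2 \cdots B_k)^t = B_k^t \cdots B_2^t B_1^t$.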
(d) The expectation of a random vector $X = (X_1, \ldots, X_n)^t$ is defined componentwise by $E(X) = [E(X_1), \ldots, E(X_n)]^t$. Linearity carries over from the scalar case in the sense that
\begin{align*}
E(X + Y) &= E(X) + E(Y) \\
E(BX)    &= B E(X)
\end{align*}
for a compatible random vector $Y$ and a compatible matrix $B$.
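For instance, with $n = 2$ and $B = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, the vector $BX = (X_1 + X_2, X_1 - X_2)^t$ has expectation $[E(X_1) + E(X_2), E(X_1) - E(X_2)]^t = B E(X)$, as scalar linearity applied to each component confirms.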
(e) If $B$ is a matrix and $W$ is a random vector, then the quadratic form $W^t B W$ has expectation $E(W^t B W) = \operatorname{tr}[B \operatorname{Var}(W)] + E(W)^t B E(W)$. To verify this assertion, observe that
\begin{align*}
E(W^t B W) &= E\Bigl(\sum_{ij} W_i b_{ij} W_j\Bigr) \\
           &= \sum_{ij} b_{ij} E(W_i W_j) \\
           &= \sum_{ij} b_{ij} [\operatorname{Cov}(W_i, W_j) + E(W_i) E(W_j)] \\
           &= \operatorname{tr}[B \operatorname{Var}(W)] + E(W)^t B E(W).
\end{align*}
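As a quick sanity check, take $B$ equal to the identity matrix $I$: the formula reduces to $E(W^t W) = \operatorname{tr}[\operatorname{Var}(W)] + E(W)^t E(W) = \sum_i \{\operatorname{Var}(W_i) + [E(W_i)]^2\} = \sum_i E(W_i^2)$, which recovers the scalar identity $E(W_i^2) = \operatorname{Var}(W_i) + [E(W_i)]^2$ summed over components.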
(f) The partial derivative of a matrix $B = (b_{ij})$ with respect to a scalar parameter $\theta$ is the matrix with entries $(\frac{\partial}{\partial \theta} b_{ij})$. Because the trace function is linear, $\frac{\partial}{\partial \theta} \operatorname{tr}(B) = \operatorname{tr}(\frac{\partial}{\partial \theta} B)$. The product rule of differentiation implies $\frac{\partial}{\partial \theta}(BC) = (\frac{\partial}{\partial \theta} B) C + B \frac{\partial}{\partial \theta} C$.
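For instance, $B(\theta) = \begin{pmatrix} 1 & \theta \\ \theta & \theta^2 \end{pmatrix}$ has $\frac{\partial}{\partial \theta} B = \begin{pmatrix} 0 & 1 \\ 1 & 2\theta \end{pmatrix}$, and both sides of the trace identity equal $2\theta$ since $\operatorname{tr}(B) = 1 + \theta^2$.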
(g) The derivative of a matrix inverse is $\frac{\partial}{\partial \theta} B^{-1} = -B^{-1} (\frac{\partial}{\partial \theta} B) B^{-1}$. To derive this formula, solve for $\frac{\partial}{\partial \theta} B^{-1}$ in
\begin{align*}
0 &= \frac{\partial}{\partial \theta} I \\
  &= \frac{\partial}{\partial \theta} (B^{-1} B) \\
  &= \Bigl(\frac{\partial}{\partial \theta} B^{-1}\Bigr) B + B^{-1} \frac{\partial}{\partial \theta} B;
\end{align*}
multiplying the final equality on the right by $B^{-1}$ gives the stated result.
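As a quick check, take $B(\theta) = \theta I$ with $\theta \neq 0$. Direct differentiation of $B^{-1} = \theta^{-1} I$ gives $-\theta^{-2} I$, and the formula agrees: $-B^{-1} (\frac{\partial}{\partial \theta} B) B^{-1} = -(\theta^{-1} I) I (\theta^{-1} I) = -\theta^{-2} I$. The result is thus the matrix analogue of the scalar rule $\frac{d}{d\theta} b^{-1} = -b^{-2} \frac{db}{d\theta}$.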