Page 121 - Matrix Analysis & Applied Linear Algebra
3.7 MATRIX INVERSION
If α is a nonzero scalar, then for each number β the equation αx = β has a
unique solution given by x = α^{-1}β. To prove that α^{-1}β is a solution, write

    α(α^{-1}β) = (αα^{-1})β = (1)β = β.                        (3.7.1)
Uniqueness follows because if x_1 and x_2 are two solutions, then

    αx_1 = β = αx_2 =⇒ α^{-1}(αx_1) = α^{-1}(αx_2)
                    =⇒ (α^{-1}α)x_1 = (α^{-1}α)x_2            (3.7.2)
                    =⇒ (1)x_1 = (1)x_2 =⇒ x_1 = x_2.
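The scalar recipe x = α^{-1}β can be checked numerically; a minimal sketch (the particular values of α and β below are arbitrary choices for illustration):

```python
# Scalar equation alpha * x = beta with alpha != 0 has the unique
# solution x = alpha^{-1} * beta (values chosen for illustration).
alpha, beta = 4.0, 10.0
x = alpha**-1 * beta          # x = (1/4) * 10 = 2.5
print(x, alpha * x == beta)   # 2.5 True
```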
These observations seem pedantic, but they are important in order to see how
to make the transition from scalar equations to matrix equations. In particular,
these arguments show that in addition to associativity, the properties
    αα^{-1} = 1  and  α^{-1}α = 1                             (3.7.3)
are the key ingredients, so if we want to solve matrix equations in the same
fashion as we solve scalar equations, then a matrix analogue of (3.7.3) is needed.
Matrix Inversion
For a given square matrix A_{n×n}, the matrix B_{n×n} that satisfies the
conditions

    AB = I_n  and  BA = I_n
is called the inverse of A and is denoted by B = A −1 . Not all square
matrices are invertible—the zero matrix is a trivial example, but there
are also many nonzero matrices that are not invertible. An invertible
matrix is said to be nonsingular, and a square matrix with no inverse
is called a singular matrix.
Notice that matrix inversion is defined for square matrices only—the
condition AA^{-1} = A^{-1}A rules out inverses of nonsquare matrices.
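Both conditions in the definition can be checked numerically. The sketch below (using NumPy; the matrices are arbitrary illustrations, not from the text) verifies AB = I and BA = I for an invertible matrix, and shows that a nonzero matrix can still fail to have an inverse:

```python
import numpy as np

# An invertible matrix, chosen for illustration.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.linalg.inv(A)
I = np.eye(2)

# Both defining conditions hold: AB = I and BA = I.
print(np.allclose(A @ B, I) and np.allclose(B @ A, I))  # True

# A nonzero matrix need not be invertible: this one is singular.
S = np.array([[1.0, 1.0],
              [1.0, 1.0]])
singular = False
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    singular = True
print(singular)  # True: S has no inverse
```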
Example 3.7.1
If

    A = [ a  b ]
        [ c  d ],   where δ = ad − bc ≠ 0,

then

    A^{-1} = (1/δ) [  d  −b ]
                   [ −c   a ]

because it can be verified that AA^{-1} = A^{-1}A = I_2.
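The 2 × 2 formula translates directly into code. A sketch (the helper name `inverse_2x2` and the test matrix are my own choices, not from the text):

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the closed-form rule
    A^{-1} = (1/delta) [[d, -b], [-c, a]], delta = ad - bc != 0."""
    delta = a * d - b * c
    if delta == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    return (1.0 / delta) * np.array([[d, -b],
                                     [-c, a]])

# Check the formula on an arbitrary invertible matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Ainv = inverse_2x2(1.0, 2.0, 3.0, 4.0)
print(np.allclose(A @ Ainv, np.eye(2)))  # True
```

Guarding on δ = 0 mirrors the hypothesis δ ≠ 0 in the example: when the determinant vanishes, no inverse exists.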