When $n$ is not a power of $2$, we obtain $M^{-1}$ by computing the inverse of a block-diagonal matrix $\operatorname{diag}(M, I)$, whose size $N$ satisfies $n \le N = 2^l < 2n$. We obtain $J_n \le j_l \le C_\alpha \pi_l$. Finally, we have the following result.
Proposition 8.1.3 If the complexity $P_n$ of the product in $M_n(\mathbb{C})$ is bounded by $c_\alpha n^\alpha$, then the complexity $J_n$ of inversion in $GL_n(\mathbb{C})$ is bounded by $d_\alpha n^\alpha$, where
$$d_\alpha = 1 + \frac{3c_\alpha}{2^{\alpha-1} - 1}\,2^\alpha.$$
That can be summarized as follows:
Those who know how to multiply know also how to invert.
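To make the reduction from inversion to multiplication concrete, here is a minimal Python sketch of a block (Schur-complement) recursion. It is an illustration rather than the construction used in the proof: it assumes that every leading principal block encountered is invertible (as holds, for instance, for positive definite matrices), whereas the proposition covers all of $GL_n(\mathbb{C})$.

import numpy as np

def block_inverse(M):
    # Recursive inversion via one step of block (Schur-complement)
    # elimination.  Assumes the size of M is a power of 2 and that every
    # leading principal block met along the way is invertible.
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    m = n // 2
    A, B = M[:m, :m], M[:m, m:]
    C, D = M[m:, :m], M[m:, m:]
    Ainv = block_inverse(A)                 # recursive call on size n/2
    AinvB = Ainv @ B                        # half-size products
    CAinv = C @ Ainv
    Sinv = block_inverse(D - C @ AinvB)     # Schur complement, size n/2
    SinvCAinv = Sinv @ CAinv
    top = np.hstack([Ainv + AinvB @ SinvCAinv, -AinvB @ Sinv])
    bottom = np.hstack([-SinvCAinv, Sinv])
    return np.vstack([top, bottom])

# quick check on a positive definite matrix of size 8 = 2^3
X = np.random.rand(8, 8)
M = X @ X.T + 8 * np.eye(8)
assert np.allclose(block_inverse(M) @ M, np.eye(8))

Each level performs two half-size inversions and six half-size products, plus $O(n^2)$ additions; this is the shape of recursion that turns a bound $P_n \le c_\alpha n^\alpha$ into a bound $J_n \le d_\alpha n^\alpha$.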
8.1.3 Complexity of the Matrix Product
The ideas that follow apply to the product of rectangular matrices, but for
the sake of simplicity, we present only the case of square matrices.
As we have seen above, the complexity $P_n$ of matrix multiplication in $M_n(k)$ is $O(n^3)$. However, better algorithms will allow us to improve the
exponent 3. The simplest and oldest one is Strassen’s algorithm, which
uses a recursion. We note first that there exists a way of computing the
product of two 2 × 2 matrices by means of 7 multiplications and 18 addi-
tions. Two features of Strassen’s formula are essential. First, the number of
multiplications that it involves is strictly less than that (eight) of the naive
algorithm. The second is that the method is valid when the matrices have
entries in a noncommutative ring, and so it can be employed for two matrices $M, N \in M_n(k)$, considered as elements of $M_2(A)$, with $A := M_{n/2}(k)$.
This trick yields
$$P_n \le 7P_{n/2} + \frac{9}{2}\,n^2.$$
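For concreteness, here is a short Python sketch of this recursion, written for matrices whose size is a power of two. The text only asserts that a scheme with seven multiplications and eighteen additions exists; the particular formulas below are the classical Strassen products, and they never commute an entry of the first factor past an entry of the second, so they remain valid over the noncommutative ring $A = M_{n/2}(k)$.

import numpy as np

def strassen(A, B):
    # Strassen multiplication for n-by-n matrices, n a power of 2:
    # seven half-size products M1..M7 and eighteen half-size additions
    # per level, the counts behind P_n <= 7 P_{n/2} + 9 n^2 / 2.
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    a11, a12, a21, a22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    b11, b12, b21, b22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    M1 = strassen(a11 + a22, b11 + b22)
    M2 = strassen(a21 + a22, b11)
    M3 = strassen(a11, b12 - b22)
    M4 = strassen(a22, b21 - b11)
    M5 = strassen(a11 + a12, b22)
    M6 = strassen(a21 - a11, b11 + b12)
    M7 = strassen(a12 - a22, b21 + b22)
    c11 = M1 + M4 - M5 + M7
    c12 = M3 + M5
    c21 = M2 + M4
    c22 = M1 - M2 + M3 + M6
    return np.vstack([np.hstack([c11, c12]), np.hstack([c21, c22])])

# quick check against the naive product for n = 8
A, B = np.random.rand(8, 8), np.random.rand(8, 8)
assert np.allclose(strassen(A, B), A @ B)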
For $n = 2^l$, the inequality above reads $\pi_l \le 7\pi_{l-1} + \frac{9}{2}\,4^l$; dividing by $7^l$, we then have
$$7^{-l}\pi_l - 7^{1-l}\pi_{l-1} \le \frac{9}{2}\left(\frac{4}{7}\right)^l,$$
which, after summation from $k = 1$ to $l$, gives
$$7^{-l}\pi_l \le \frac{9}{2}\cdot\frac{1}{1 - 4/7},$$
because $4/7 < 1$. Finally,
$$\pi_l \le \frac{21}{2}\,7^l.$$
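As a quick numerical sanity check (not part of the text), one can iterate the recursion with equality, $\pi_l = 7\pi_{l-1} + \frac{9}{2}\,4^l$, starting from the assumed base value $\pi_0 = 1$ (a single scalar multiplication), and compare with the bound just obtained; a few lines of Python suffice:

# Worst case allowed by the recursion: pi_l = 7*pi_{l-1} + (9/2)*4^l,
# with the assumed base value pi_0 = 1, compared against (21/2)*7^l.
pi = 1
for l in range(1, 16):
    pi = 7 * pi + 9 * 4**l // 2        # exact: 4**l is even for l >= 1
    assert 2 * pi <= 21 * 7**l         # i.e. pi_l <= (21/2) * 7^l
    print(f"l = {l:2d}   pi_l / 7^l = {pi / 7**l:.3f}")

The ratio $\pi_l / 7^l$ stabilizes near $7$, comfortably below $21/2$.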
When $n$ is not a power of two, one chooses $l$ such that $n \le 2^l < 2n$, and we obtain the following result.

