Representing linear transformations by matrices: the general case
We turn now to the problem of representing linear
transformations between arbitrary finite-dimensional
vector spaces by matrices.
Let V and W be two non-zero finite-dimensional vector
spaces over the same field of scalars F. Consider a linear
transformation T : V → W. The first thing to do is to choose
and fix ordered bases for V and W, say
$$B = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\} \quad \text{and} \quad C = \{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_m\},$$
respectively. We saw in 5.1 how any vector v of V can be
represented by a unique coordinate vector with respect to the
ordered basis B. If $\mathbf{v} = c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n$, this coordinate
vector is

$$[\mathbf{v}]_B = \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}.$$
Similarly each w in W may be represented by a coordinate
vector $[\mathbf{w}]_C$ with respect to C.
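For instance (a small illustrative example, not from the text): if $V = \mathbb{R}^2$ and $B = \{(1,1), (1,-1)\}$, then $(3,1) = 2\,(1,1) + 1\,(1,-1)$, so

$$[(3,1)]_B = \begin{pmatrix} 2 \\ 1 \end{pmatrix}.$$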
To represent T by a matrix with respect to these chosen
ordered bases, we first express the image under T of each
vector in B as a linear combination of the vectors of C, say
$$T(\mathbf{v}_j) = a_{1j}\mathbf{w}_1 + \cdots + a_{mj}\mathbf{w}_m = \sum_{i=1}^{m} a_{ij}\mathbf{w}_i,$$

where the $a_{ij}$ are scalars. Thus $[T(\mathbf{v}_j)]_C$ is the column vector
with entries $a_{1j}, \ldots, a_{mj}$. Let $A$ be the $m \times n$ matrix whose
$(i,j)$ entry is $a_{ij}$. Thus the columns of $A$ are just the coordinate
vectors of $T(\mathbf{v}_1), \ldots, T(\mathbf{v}_n)$ with respect to $C$.
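To make the construction concrete, here is a minimal Python/NumPy sketch; the map $T$ and the bases $B$ and $C$ are hypothetical choices for illustration, not taken from the text. Column $j$ of $A$ is found by solving the linear system that expresses $T(\mathbf{v}_j)$ in the basis $C$.

```python
import numpy as np

# Hypothetical example: T : R^3 -> R^2 with T(x, y, z) = (x + 2y, 3z).
def T(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])

# Ordered bases B (for V = R^3) and C (for W = R^2), chosen for illustration.
B = [np.array([1.0, 0.0, 0.0]),
     np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 1.0, 1.0])]
C = [np.array([1.0, 1.0]),
     np.array([1.0, -1.0])]

# Matrix whose columns are the vectors of C; solving C_mat @ c = w
# gives the coordinate vector [w]_C.
C_mat = np.column_stack(C)

# Column j of A is the coordinate vector of T(v_j) with respect to C.
A = np.column_stack([np.linalg.solve(C_mat, T(v)) for v in B])
print(A)  # a 2 x 3 matrix: m = dim W = 2 rows, n = dim V = 3 columns
```

Solving one system per column works for any basis of $W$; when $C$ is the standard basis, the coefficient matrix is the identity and the columns of $A$ are simply the vectors $T(\mathbf{v}_j)$ themselves.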