Page 102 - Anatomy of a Robot
COMPUTER HARDWARE 87
quencies (multiplied by coefficients). A good drawing of superimposed sine waves
can be found at www.yorvic.york.ac.uk/~cowtan/fourier/ftheory.html.
The Fourier Transform has many variants, including the Fast Fourier Transform
(FFT) and the Discrete Cosine Transform (DCT). These transforms are commonly
used to remove noise and unwanted frequencies from an image or signal as fol-
lows. The image is transformed into a series of discrete frequencies. Then the
unwanted frequencies are erased (or the wanted frequencies are picked out). Either
way, the wheat is separated from the chaff. Then the inverse Fourier Transform is
computed to reconstruct the image, which is clearer and easier to understand than
the original. Suffice it to say, the FFT, and other transforms like it, use a series of
MAC operations.
In robots, FFTs can be used to identify objects in the field of vision. If FFTs are
performed on the digitized field of view, the robot’s DSP computer can look for
the FFT signatures of specific objects, rejecting all those objects that don’t con-
form. Interesting information on Fourier transforms can be found at
www.yorvic.york.ac.uk/~cowtan/fourier/fourier.html and at
www.medialab.it/fourier/fourier.htm.
Notes on DSP
DSP processors have special-purpose hardware that speeds up the computations they must
perform. These hardware structures provide both increased accuracy and faster execution.
Arithmetic We’ve seen that one of the central features of a DSP processor is the MAC,
a hardware structure capable of executing a multiplication followed by an addition. This
arithmetic operation is performed on a digital representation of a number. Numbers can
be represented within a computer in a fixed-point format or a floating-point format. Be
aware that DSP processors come in these two versions and that the floating-point DSP
processor is much more expensive.
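The MAC primitive is easiest to see in a finite impulse response (FIR) filter, the bread-and-butter DSP workload. The sketch below uses floating-point Python arithmetic for clarity; a fixed-point DSP would do the same multiply-accumulate chain in integer hardware, one MAC per cycle:

```python
def fir_filter(samples, coefficients):
    """FIR filter: each output sample is a chain of
    multiply-accumulate (MAC) operations, the primitive a DSP
    processor is built to execute at full speed."""
    taps = len(coefficients)
    out = []
    for i in range(taps - 1, len(samples)):
        acc = 0.0                                    # the accumulator
        for j in range(taps):
            acc += coefficients[j] * samples[i - j]  # one MAC
        out.append(acc)
    return out
```

With four equal coefficients of 0.25, the filter is a 4-point moving average: a step input ramps smoothly from 0 to 1 instead of jumping.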
Fixed-point numbers are familiar to us as integers. A 16-bit fixed-point number can rep-
resent a total of 2^16 = 65,536 numbers. This range covers about 5 decades (< 100,000).
But there are some problems with fixed-point format. If we were to multiply two fixed-
point numbers like 60,000 × 50,000, the answer could not be represented in 16-bit fixed-
point format.
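A quick sketch makes the overflow concrete. Masking with `& 0xFFFF` keeps only the low 16 bits, which is roughly what 16-bit fixed-point hardware would be left with after the overflow:

```python
# 60,000 * 50,000 = 3,000,000,000, far beyond the 65,535 maximum
# that a 16-bit word can hold.
product = 60_000 * 50_000
truncated = product & 0xFFFF   # low 16 bits only, as in 16-bit hardware
print(product, truncated)      # prints: 3000000000 24064
```

The truncated result (24,064) bears no useful relationship to the true product, which is why overflow must be handled explicitly in fixed-point arithmetic.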
To solve such an overflow problem, we can temporarily invent a “16-bit floating-point”
format. Such a format is impractical, but illustrative here. Many people are familiar with
scientific notation, where a number can be represented as 2.71 × 10^12, a very large num-
ber. Suppose we take our 16-bit number and divide up the bits differently, using 10 bits
as the “mantissa” to represent the 2.71 number and 6 of the bits as the “exponent” to