Page 111 - Anatomy of a Robot
96 CHAPTER THREE
the DRAM memory chips external to the computer chip take a good long time to deliver their contents to the inside of the computer chip, maybe 60 ns. That may not seem like a long time, but if the computer chip can execute an instruction every 10 ns, it wastes a great deal of time waiting for instructions to come out of memory.
The cache watches accesses to external memory. If the cache control circuitry inside the computer chip already holds the contents of the memory address being read, it cuts the computer chip's memory cycle short and simply supplies the data from its own cache memory instead. This way, the instruction executes two to six times faster. The cache is easy to use because it is transparent to the programmer: it is simply turned on, and it automatically speeds up program execution.
Many computer programs will execute in tight loops for short periods of time. The
execution of a FOR loop in C is a typical example. FOR loops will execute the same
instructions for a prescribed number of iterations. While executing in a FOR loop, a C
program will execute the same instructions over and over again. If these instructions are
put into the cache memory, the FOR loop will execute much more rapidly. As a general
rule, most programs will execute in such “local” loops a large percentage of the time.
This is the true power of using a cache memory structure within a processor. Even a
small amount of cache memory goes a long way. Generally, only the faster computer
chips have cache circuitry since only they can truly take advantage of it.
How does cache memory work? We'll describe a more complex structure for cache memory first and look at a simplification later. Cache memory usually holds just a few thousand words. Each of these words can contain both a full memory data word (duplicating the contents of a DRAM memory address) and the DRAM memory address itself. The first time the computer reads data from a DRAM address, the cache memory controller puts the data and the address into the cache memory at the same time.
Later, if the computer program reads that DRAM address, the cache memory recognizes
the address as a match, gets the computer’s attention, rapidly substitutes the data from
the cache, and cuts the memory access short. As the program continues to access DRAM
addresses in a small “local loop,” all the data from those addresses is also put into the
cache memory. As the program continues to loop through those DRAM addresses, the
cache memory steps forward with the data and acts to speed up the computer. When the
program moves on to another portion of the program, new data is cached.
But what happens when the cache fills up? Generally, the cache controller has hardware that tracks the least recently used cache words. When a new location is required for cache data, the controller selects the least used cache location, dumps the old, unused data from it, and puts the new cache data in its place.
As a side note, when data is written into memory that is also cached, the data is written into the cache memory at the same time as it's written into the real DRAM. That