Page 1129 - The Mechatronics Handbook

There are several properties of memory, including speed, capacity, and cost, that play an important
role in overall system performance. The speed of the memory system is a key performance parameter
in the design of a microprocessor system. The latency (L) of the memory is defined as the time delay
from when the processor first requests data from memory until the processor receives the data. Bandwidth
(BW) is defined as the rate at which information can be transferred from the memory system. Memory
bandwidth and latency are related through the number of outstanding requests (R) that the memory
system can service concurrently:

                                         BW = R/L                              (42.4)
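Equation (42.4) can be checked with a quick numerical sketch. The latency and request-count values below are illustrative assumptions, not figures from the text:

```python
# Sketch: relating bandwidth, latency, and outstanding requests (Eq. 42.4).
# The numbers used here are hypothetical, chosen only for illustration.

def bandwidth(outstanding_requests, latency_s):
    """Peak rate at which requests complete: BW = R / L (requests per second)."""
    return outstanding_requests / latency_s

# A memory system with 60 ns latency that can keep 8 requests in flight:
bw = bandwidth(8, 60e-9)
print(f"{bw:.2e} requests/s")  # prints "1.33e+08 requests/s"
```

The dimensional check is the useful part: R is a count and L is a time, so R/L has units of requests per unit time, as a bandwidth must.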

Bandwidth plays an important role in keeping the processor busy with work. However, technology
tradeoffs made to optimize latency and improve bandwidth often conflict with the need to increase
the capacity and reduce the cost of the memory system.

Cache Memory
Cache memory, or simply cache, is a small, fast memory constructed using semiconductor SRAM. In
modern computer systems there is usually a hierarchy of cache memories. The top-level cache is closest
to the processor and the bottom level is closest to the main memory. Each higher-level cache is about
5-10 times faster than the next level. The purpose of a cache hierarchy is to satisfy most of the processor
memory accesses in one or a small number of clock cycles. The top-level cache is often split into an
instruction cache and a data cache to allow the processor to perform simultaneous accesses for
instructions and data. Cache memories were first used in IBM mainframe computers in the 1960s;
since 1985, cache memories have become a standard feature of virtually all microprocessors.
Cache memories exploit the principle of locality of reference. This principle holds that some memory
locations are referenced much more frequently than others, based on two program properties. Spatial
locality is the property that an access to a memory location increases the probability that nearby memory
locations will also be accessed. Spatial locality arises predominantly from sequential access to program
code and structured data. Temporal locality is the property that an access to a memory location greatly
increases the probability that the same location will be accessed again in the near future. Together, the
two properties ensure that most memory references can be satisfied by the cache memory.
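The two localities can be made concrete with a loop-ordering sketch. In languages with contiguous row-major arrays (such as C), the inner loop of the first function walks adjacent addresses and so exploits spatial locality, while the second jumps a full row between accesses. Python lists are not stored contiguously, so this is only a sketch of the access pattern, not a faithful cache experiment:

```python
# Sketch: two traversals of the same matrix with identical results but
# very different spatial locality in a row-major memory layout.

def sum_row_major(matrix):
    total = 0
    for row in matrix:            # outer loop over rows
        for x in row:             # inner loop touches adjacent elements
            total += x
    return total

def sum_column_major(matrix):
    total = 0
    n_rows, n_cols = len(matrix), len(matrix[0])
    for j in range(n_cols):       # inner loop strides a whole row ahead
        for i in range(n_rows):
            total += matrix[i][j]
    return total

m = [[1, 2, 3], [4, 5, 6]]
print(sum_row_major(m), sum_column_major(m))  # prints "21 21"
```

Temporal locality shows up differently: the loop variables and `total` are reused on every iteration, which is why a compiler keeps them in registers or the top-level cache.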
There are three common cache memory designs: direct-mapped, fully associative, and set associative.
Figure 42.6 illustrates two of these schemes, the direct-mapped and set-associative designs.

FIGURE 42.6  Cache memory: (a) direct-mapped design, (b) two-way set-associative design.
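A direct-mapped lookup of the kind shown in Figure 42.6(a) splits an address into an offset, an index that selects exactly one line, and a tag that is compared against the stored tag. The line size, line count, and address width below are hypothetical parameters chosen for the sketch:

```python
# Minimal sketch of a direct-mapped cache lookup (hypothetical geometry:
# 16-byte lines, 256 lines). Only tags are modeled, not the data itself.

LINE_SIZE   = 16           # bytes per cache line
NUM_LINES   = 256          # lines in the cache
OFFSET_BITS = 4            # log2(LINE_SIZE)
INDEX_BITS  = 8            # log2(NUM_LINES)

class DirectMappedCache:
    def __init__(self):
        # Each entry holds the tag of the line currently cached, or None.
        self.tags = [None] * NUM_LINES

    def access(self, addr):
        """Return True on a hit, False on a miss (installing the line)."""
        index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
        tag   = addr >> (OFFSET_BITS + INDEX_BITS)
        if self.tags[index] == tag:
            return True            # hit: stored tag matches
        self.tags[index] = tag     # miss: evict whatever was here
        return False

c = DirectMappedCache()
print(c.access(0x1234))   # prints "False" (cold miss)
print(c.access(0x1234))   # prints "True"  (temporal locality pays off)
print(c.access(0x11234))  # prints "False" (same index, different tag)
```

The last access illustrates the weakness of the direct-mapped design: two addresses that share an index evict each other even when the rest of the cache is empty, which is the conflict problem that the set-associative design in Figure 42.6(b) mitigates by providing more than one line per index.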

                                 ©2002 CRC Press LLC