Direct-mapped cache, shown in Fig. 42.6(a), allows each memory block exactly one place to reside within the cache. Fully associative cache, shown in Fig. 42.6(b), allows a block to be placed anywhere in the cache. Set-associative cache restricts a block to a limited set of places in the cache.
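As a rough illustration of the three placement schemes, the following sketch computes where a byte address may reside in each organization; the block size, line count, and function names are assumed values for illustration, not a prescribed geometry.

    BLOCK_SIZE = 32    # bytes per block (assumed)
    NUM_LINES  = 256   # total cache lines (assumed)

    def direct_mapped_index(addr):
        # Direct-mapped: the block number modulo the line count picks
        # the single line where the block may reside.
        return (addr // BLOCK_SIZE) % NUM_LINES

    def set_associative_index(addr, num_ways):
        # Set-associative: the block is confined to one set, but may
        # occupy any of that set's num_ways lines.
        num_sets = NUM_LINES // num_ways
        return (addr // BLOCK_SIZE) % num_sets

    # Fully associative is the limiting case num_ways == NUM_LINES:
    # a single set, so a block may occupy any line in the cache.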
Cache misses are said to occur when the requested data does not reside in any of the possible cache locations. Misses in caches can be classified into three categories: conflict, compulsory, and capacity. Conflict misses are misses that would not occur in a fully associative cache with LRU (least recently used) replacement. Compulsory misses are those incurred on the first reference to a memory location. Capacity misses occur when the cache is not large enough to retain data between references. Complete cache miss definitions are provided in Ref. 4.
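These categories can be made concrete with a small simulation. The sketch below, a non-authoritative illustration, replays a trace of block numbers through a direct-mapped cache and classifies each miss by comparison against a fully associative LRU cache of the same capacity, exactly as the definitions above prescribe; the trace format and function name are assumptions.

    from collections import OrderedDict

    def classify_misses(trace, num_lines):
        direct = {}                  # set index -> resident block number
        lru = OrderedDict()          # fully associative LRU model, same capacity
        seen = set()                 # blocks referenced at least once
        counts = {"hit": 0, "compulsory": 0, "conflict": 0, "capacity": 0}
        for block in trace:
            idx = block % num_lines
            dm_hit = direct.get(idx) == block
            fa_hit = block in lru
            if fa_hit:
                lru.move_to_end(block)        # refresh recency
            else:
                if len(lru) >= num_lines:
                    lru.popitem(last=False)   # evict least recently used
                lru[block] = None
            if dm_hit:
                counts["hit"] += 1
            elif block not in seen:
                counts["compulsory"] += 1     # first reference to this block
            elif fa_hit:
                counts["conflict"] += 1       # full associativity would have hit
            else:
                counts["capacity"] += 1       # too little capacity either way
            direct[idx] = block
            seen.add(block)
        return counts

For example, classify_misses([0, 8, 0, 8], 8) reports two compulsory and two conflict misses: blocks 0 and 8 collide in the one direct-mapped line they share, yet both fit comfortably in a fully associative cache of eight lines.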
Unlike main memory latency, the latency of a cache memory is not fixed; it depends on the delay and frequency of cache misses. A performance metric that accounts for the penalty of cache misses is effective latency. Effective latency depends on the two possible access latencies: the hit latency (L_HIT), experienced when accessing data residing in the cache, and the miss latency (L_MISS), experienced when accessing data not residing in the cache. It also depends on the hit rate (H), the percentage of memory accesses that hit in the cache, and the miss rate (M, or 1 − H), the percentage of memory accesses that miss in the cache. Effective latency in a cache system is calculated as

$L_{\mathrm{effective}} = L_{\mathrm{HIT}} \times H + L_{\mathrm{MISS}} \times (1 - H)$        (42.5)
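As a worked example with assumed values (a 2-cycle hit, a 50-cycle miss penalty, and a 95% hit rate), Eq. (42.5) gives:

    L_HIT, L_MISS, H = 2, 50, 0.95        # illustrative values, in cycles
    L_effective = L_HIT * H + L_MISS * (1 - H)
    print(L_effective)                    # 1.9 + 2.5 = 4.4 cycles per access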
In addition to the base cache design and size, several other cache parameters affect the overall cache performance and miss rate in a system. The main memory update method indicates when main memory is updated by store operations. In a write-through cache, each write is immediately reflected to main memory. In a write-back cache, writes are reflected to main memory only when the respective cache block is replaced. Cache block allocation is another parameter and designates whether the cache block is allocated on writes or on reads. Last, block replacement algorithms for associative structures can be designed in various ways to extract additional cache performance. These include LRU, LFU (least frequently used), random, and FIFO (first-in, first-out). These cache management strategies attempt to exploit the properties of locality: spatial locality is exploited by deciding which memory block is placed in the cache, and temporal locality is exploited by deciding which cache block is replaced. Traditionally, a cache blocks all new requests while it services a miss. However, a non-blocking cache can be designed to service multiple miss requests simultaneously, thus alleviating the delay in accessing memory data.
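These policies compose naturally in a simple model. The sketch below is a minimal illustration rather than a reference design: a set-associative cache with LRU replacement and write-back, write-allocate policies, with geometry defaults that are assumptions.

    from collections import OrderedDict

    class SetAssociativeCache:
        # Write-back, write-allocate cache with LRU replacement (illustrative).
        def __init__(self, num_sets=64, num_ways=4):
            self.num_sets = num_sets
            self.num_ways = num_ways
            # Each set maps tag -> dirty flag; insertion order tracks recency.
            self.sets = [OrderedDict() for _ in range(num_sets)]

        def access(self, block, is_write):
            s = self.sets[block % self.num_sets]
            tag = block // self.num_sets
            if tag in s:
                s.move_to_end(tag)            # refresh LRU position on a hit
                s[tag] = s[tag] or is_write   # write-back: just mark the block dirty
                return True
            if len(s) >= self.num_ways:
                evicted_tag, dirty = s.popitem(last=False)   # evict the LRU way
                # Write-back policy: main memory is updated only now, and only
                # if the evicted block was modified (dirty).
            s[tag] = is_write                 # write-allocate: fill on a write miss too
            return False

A write-through variant would instead forward every store to main memory inside access(), and could skip allocation on write misses (write-no-allocate).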
In addition to the multiple levels of cache hierarchy, additional memory buffers can be used to improve cache performance. Two such buffers are a streaming/prefetch buffer and a victim cache (Ref. 2). Figure 42.7 illustrates the relation of the streaming buffer and victim cache to the primary cache of a memory system.
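The two buffers can be sketched as small stand-alone structures that are probed on a primary-cache miss before the next level of the hierarchy is accessed; the capacities, interfaces, and names here are assumptions for illustration.

    from collections import OrderedDict, deque

    class VictimCache:
        # Small fully associative buffer holding blocks recently evicted
        # from the primary cache.
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.blocks = OrderedDict()         # block -> None, in LRU order

        def insert(self, evicted_block):
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False) # discard the oldest victim
            self.blocks[evicted_block] = None

        def probe(self, block):
            if block in self.blocks:
                del self.blocks[block]          # hit: swap back into the primary cache
                return True
            return False

    class StreamBuffer:
        # FIFO of sequentially prefetched blocks; a hit at the head keeps streaming.
        def __init__(self, depth=4):
            self.depth = depth
            self.fifo = deque()

        def refill(self, miss_block):
            # On a primary miss, start prefetching the blocks that follow it.
            self.fifo = deque(miss_block + i for i in range(1, self.depth + 1))

        def probe(self, block):
            if self.fifo and self.fifo[0] == block:
                self.fifo.popleft()
                self.fifo.append(block + self.depth)  # stay depth blocks ahead
                return True
            return False

The victim cache catches blocks lost to conflict misses, while the stream buffer exploits sequential (spatial) reference patterns.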

                                 FIGURE 42.7  Advanced cache memory system.
