Page 72 - A Practical Guide from Design Planning to Manufacturing

48   Chapter Two

          Cache memories are implemented using only transistors as Static
        Random Access Memory (SRAM). SRAM is a static memory because it
        will hold its value as long as power is supplied. This requires using six
        transistors for each memory bit instead of only one. As a result, SRAM
        memories require more die area per bit and therefore cost more per bit.
        However, they provide faster access and do not require the special
        DRAM processing steps used to create the DRAM cell capacitors.
        Because DRAM manufacturing has diverged from that of microprocessors,
        processors normally do not use DRAM cells; all processors do, however,
        contain SRAM memories.
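          The cost argument above is simple arithmetic: cost scales roughly with die area, and a 6-transistor SRAM cell is several times larger than a 1-transistor DRAM cell. A minimal sketch in Python, using hypothetical cell areas and wafer cost (the 6T and 1T figures are from the text; everything else is assumed for illustration):

```python
# Sketch: why SRAM costs more per bit than DRAM (illustrative numbers only).
# The 6T SRAM cell occupies far more area than the 1T DRAM cell; die cost
# scales roughly with area, so cost per bit follows.

SRAM_TRANSISTORS_PER_BIT = 6   # standard 6T SRAM cell (from the text)
DRAM_TRANSISTORS_PER_BIT = 1   # 1T DRAM cell (from the text)

# Hypothetical cell areas in square microns (assumed, process-dependent).
sram_cell_area_um2 = 1.0
dram_cell_area_um2 = 0.15

def cost_per_megabit(cell_area_um2, cost_per_mm2=0.05):
    """Rough cost of one megabit, assuming cost scales linearly with area."""
    bits = 1_000_000
    area_mm2 = bits * cell_area_um2 / 1_000_000  # 1 mm^2 = 1e6 um^2
    return area_mm2 * cost_per_mm2

print(f"SRAM: ${cost_per_megabit(sram_cell_area_um2):.4f} per Mb")
print(f"DRAM: ${cost_per_megabit(dram_cell_area_um2):.4f} per Mb")
```

          With these assumed numbers the SRAM bit costs several times more than the DRAM bit; real ratios depend on the process, but the direction of the comparison is what the text is describing.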
          Early DRAM chips were asynchronous, meaning there was no shared
        timing signal between the memory and the processor. Later, synchronous
        DRAM (SDRAM) designs used shared clocking signals to provide higher
        bandwidth data transfer. All DRAM standards currently being
        manufactured use some type of clocking signal. SDRAM also takes
        advantage of the fact that memory accesses typically appear in bursts
        of sequential addresses.
          The memory bus clock frequency is set to allow the SDRAM chips to
        perform one data transfer every bus clock, but only if the transfers are
        from sequential addresses. This operation is known as burst mode and
        it determines the maximum data bandwidth possible. When accessing
        nonsequential locations, there are added latencies. Different DRAM
        innovations have focused on improving both the maximum data band-
        width and the average access latency.
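          Because burst mode delivers one transfer per bus clock, the maximum bandwidth follows directly from the bus clock and bus width. A small sketch, with assumed example figures (a 64-bit bus at 133 MHz, roughly PC133-class SDRAM):

```python
# Sketch: peak burst-mode bandwidth of an SDRAM bus (assumed parameters).
# In burst mode the SDRAM performs one data transfer every bus clock,
# so peak bandwidth = clock rate x transfers per clock x bus width.

def peak_bandwidth_mb_per_s(bus_clock_mhz, bus_width_bits, transfers_per_clock=1):
    """Peak data bandwidth in MB/s for sequential (burst) transfers."""
    bytes_per_transfer = bus_width_bits / 8
    return bus_clock_mhz * transfers_per_clock * bytes_per_transfer

# Example: a 64-bit bus at 133 MHz
print(peak_bandwidth_mb_per_s(133, 64))  # 1064.0 MB/s peak
```

          This is the maximum only; as the text notes, nonsequential accesses add latencies that reduce the achieved bandwidth below this figure.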
          DRAM chips contain grids of memory cells arranged into rows and
        columns. To request a specific piece of data, first the row address is sup-
        plied and then a column address is supplied. The row access strobe
        (RAS) and column access strobe (CAS) signals tell the DRAM whether
        the current address being supplied is for a row or column. Early DRAM
        designs required that a new row address and column address be given
        for every access, but very often the data being accessed spanned multiple
        columns on the same row. Current DRAM designs take advantage of this
        by allowing multiple accesses to the same memory row to be made with-
        out the latency of driving a new row address.
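          The row/column split described above can be sketched as bit slicing of a linear address. The bit widths here are assumed for illustration; real parts vary:

```python
# Sketch: splitting a linear cell address into DRAM row and column
# addresses (bit widths assumed for illustration). The row bits are
# driven with RAS asserted, then the column bits with CAS asserted.
# Consecutive addresses share a row, which is why reusing an open row
# avoids driving a new row address.

ROW_BITS = 12   # assumed: 4096 rows
COL_BITS = 10   # assumed: 1024 columns per row

def split_address(addr):
    """Return (row, column) for a linear cell address."""
    col = addr & ((1 << COL_BITS) - 1)                 # low bits: column
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)   # high bits: row
    return row, col

# Sequential addresses stay in the same row until the column bits wrap:
print(split_address(0))      # (0, 0)
print(split_address(1023))   # (0, 1023) -- same row, no new row address
print(split_address(1024))   # (1, 0)    -- a new row must be opened
```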
          After a new row is accessed, there is a delay before a column address
        can be driven. This is the RAS-to-CAS delay (T_RCD). After the column
        address is supplied, there is a latency until the first piece of data
        is supplied, the CAS latency (T_CL). After the CAS latency, data
        arrives every clock cycle from sequential locations. Before a new row
        can be accessed, the current row must be precharged (T_RP) to leave
        it ready for future accesses. In addition to the bus frequency, these
        three latencies are used to describe the performance of an SDRAM.
        They are commonly specified in the format "T_CL - T_RCD - T_RP."
        Typical values for each of these would be 2 or 3 cycles. Thus,
        Fig. 2-4 shows the operation of a "2-2-3" SDRAM.
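          The three latencies combine differently depending on whether an access hits the currently open row. A minimal model (the "2-2-3" figures follow the text; the assumption that a row miss pays precharge, then row activate, then CAS in sequence is a simplification of real command scheduling):

```python
# Sketch: bus cycles until the first data word for an SDRAM specified as
# "T_CL - T_RCD - T_RP", e.g. the "2-2-3" part from Fig. 2-4.
# Simplified model: a row hit pays only the CAS latency; a row miss must
# precharge the old row, open the new row, then issue the column read.

def first_data_latency(t_cl, t_rcd, t_rp, row_hit):
    """Bus cycles from request until the first data word arrives."""
    if row_hit:
        return t_cl                  # row already open: CAS latency only
    return t_rp + t_rcd + t_cl       # precharge + row activate + CAS

# "2-2-3" SDRAM: T_CL = 2, T_RCD = 2, T_RP = 3
print(first_data_latency(2, 2, 3, row_hit=True))   # -> 2 cycles
print(first_data_latency(2, 2, 3, row_hit=False))  # -> 7 cycles
```

          The gap between 2 and 7 cycles in this model is why keeping rows open for repeated accesses, as described above, improves average access latency.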