Memory Access Overview

To understand the meaning of the various RAM timings, we first need to look at how memory is accessed. This is still only part of the overall memory subsystem, but it is the portion that relates most directly to the memory timings; we will cover the remaining portions of memory access in a moment. Ignoring how a memory request actually gets to the RAM modules, the pattern for a memory access is as follows.

First, the requested address arrives at the memory module. In a worst-case scenario, the address is not contained in one of the currently active rows of memory (also called memory pages), so an active row is closed (precharged), the new row is requested, and once that row becomes active, the specific column within it can be requested after a slight delay. There is another delay while the column is accessed, and then the data begins coming across the memory bus. The data is not sent across the bus all at once, but in a series of back-to-back transfers instead. The number of transfers used to send the data is referred to as the burst length, and these transfers occur at the effective data rate - i.e. two per clock on DDR/DDR2 and one per clock on SDRAM.
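As a rough illustration, the worst-case delay can be broken down into the individual steps just described. The sketch below is a simplified model, and the timing values (tRP, tRCD, CL, and a burst length of 8) are hypothetical examples rather than figures from any particular module.

```python
# Simplified worst-case access latency, in memory-bus clock cycles.
# All timing values below are hypothetical examples.
tRP  = 3   # precharge: close the currently active row
tRCD = 3   # activate: open the newly requested row
CL   = 3   # CAS latency: delay before the column data appears on the bus
burst_length = 8   # number of back-to-back transfers in the burst

# DDR/DDR2 transfer two pieces of data per clock, so a burst of 8 occupies
# burst_length / 2 bus clocks; SDRAM would need burst_length full clocks.
transfer_clocks_ddr = burst_length / 2

worst_case = tRP + tRCD + CL + transfer_clocks_ddr
print(f"Worst-case access: {worst_case} bus clocks until the burst completes")
```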

That is the worst-case scenario, but luckily it is not the most common one. Due to spatial locality - the principle that if you access one piece of data at address X, you will likely also access the data at X+1, X-1, X+2, X-2, and so on - memory keeps what are called active rows (or pages). These are rows currently held in what amounts to a small cache on the memory chips - the sense amplifiers - and when a request arrives for data already stored in an active row, only the request for the specific column is needed. Row sizes are typically 1KB or 2KB on current DRAMs, and column sizes vary according to several other factors, such as device width and burst length.
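To make the benefit of an open row concrete, here is a minimal comparison of a row hit against the worst case sketched above. The timings are again hypothetical; the point is simply that a hit skips the precharge and activate steps entirely.

```python
# Hypothetical timings in bus clocks; same assumptions as the previous sketch.
tRP, tRCD, CL = 3, 3, 3
burst_clocks = 8 / 2   # burst length of 8 at double data rate

row_miss = tRP + tRCD + CL + burst_clocks   # active row must be closed and replaced
row_hit  = CL + burst_clocks                # row already held in the sense amplifiers

print(f"Row miss: {row_miss} clocks, row hit: {row_hit} clocks")
```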

A further look at the actual layout of memory is also important. We have talked about rows and columns being accessed, but there is still more to the overall structure. As mentioned, rows are also called memory pages, and pages are further grouped into memory banks. Banks can be thought of as something like the set associativity of a cache - each bank can have only one active row at a time. If a second page within a bank is requested, the open page must be closed before the new page can be opened. In certain situations, when two different pages within the same bank are requested in rapid succession, additional delays can occur, as a page must remain active for a minimum amount of time. Increasing the number of banks reduces the chance of this happening, so generally speaking, more banks can help to improve memory performance. There is a trade-off, however, as more banks require additional logic in the memory controller and other components of the memory subsystem.
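The "one open page per bank" rule can be sketched the same way. The model below simply tracks the open row in each bank and labels each request as a row hit, a row miss, or a bank conflict; the bank and row numbers are made up for illustration, and real controllers are of course far more sophisticated.

```python
# Minimal open-row model: one active row per bank, as described above.
# Bank and row numbers here are made-up values for illustration only.
open_rows = {}   # bank -> currently active row

def access(bank, row):
    """Label the kind of access this request causes under an open-page policy."""
    if open_rows.get(bank) == row:
        return "row hit (column access only)"
    elif bank in open_rows:
        open_rows[bank] = row
        return "bank conflict (close old row, then open new row)"
    else:
        open_rows[bank] = row
        return "row miss (open new row)"

# Two requests to different rows of the same bank conflict;
# spreading requests across banks avoids the extra close/open cycle.
print(access(bank=0, row=10))   # row miss
print(access(bank=0, row=11))   # bank conflict
print(access(bank=1, row=11))   # row miss (different bank, no conflict)
print(access(bank=0, row=11))   # row hit
```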

Keep this initial explanation of the RAM access pattern in mind when we talk about timings in a moment, but now we need to go back a step and refine our description of how memory is accessed.
