previous page: 2.13  How do I get a replacement for my buggy Pentium?
  
page up: PC Hardware FAQ
  
next page: 2.15  What happened to my 384k?

2.14 Memory terminology, what does it mean?

Description

This item is from the PC Hardware FAQ, by Willie Lim and Ralph Valentino with numerous contributions by others. (v1.25).

[From: cls@truffula.sj.ca.us (Cameron L. Spitzer)]

Read/write memory in computers is implemented using Random Access Memory chips (RAMs). RAMs are also used to store the displayed image in a video board, to buffer frames in a network controller or sectors in a disk controller, etc. RAMs are sold by their size (in bits), word width (how many bits can you access in one cycle), and access time (how fast you can read a location), among other characteristics.

SRAMs and DRAMs:

RAMs can be classified into two types: "static" and "dynamic."

In a static RAM, each bit is represented by the state of a circuit with two stable states. Such a "bistable" circuit can be built with four transistors (for maximum density) or six (for highest speed and lowest power). Static RAMs (SRAMs) are available in many configurations. (Almost) all SRAMs have one pin per address line, and all of them are able to store data for as long as power is applied, without any external circuit activity.

In a dynamic RAM (DRAM), each bit is represented by the charge on a *very* small (30-50 femtofarads) capacitor, which is built into a single, specialized transistor. DRAM storage cells take only about a quarter of the silicon area that SRAM cells take, and silicon area translates into cost. The cells in a DRAM are organized into rows and columns. To access a bit, you first select its row, and then you select its column. Unfortunately, the charge leaks off the capacitor over time, so each cell must be periodically "refreshed" by reading it and writing it back. This happens automatically whenever a row is accessed. After you're finished accessing a row, you have to give the DRAM time to copy the row of bits back to the cells: the "precharge" time.

Because the row and column addresses are not needed at the same time, they share the same pins. This makes the DRAM package smaller and cheaper, but it makes the problem of distributing the signals in the memory array difficult, because the timing becomes so critical. Signal integrity in the memory array is one of the things that differentiate a lousy motherboard from a high quality one.
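The row/column split above can be sketched in a few lines. This is an illustrative toy (not from the FAQ): a hypothetical memory controller dividing a linear cell address into the row and column addresses that get multiplexed onto a DRAM's shared address pins, assuming a 1M x 1 part with 10 row bits and 10 column bits.

```python
# Hypothetical 1M x 1 DRAM: 10 row address bits, 10 column address bits.
ROW_BITS = 10
COL_BITS = 10

def split_address(addr):
    """Return (row, col) driven on the shared address pins at RAS and CAS time."""
    if not 0 <= addr < (1 << (ROW_BITS + COL_BITS)):
        raise ValueError("address out of range")
    row = addr >> COL_BITS               # high bits select the row
    col = addr & ((1 << COL_BITS) - 1)   # low bits select the column
    return row, col

# Consecutive addresses fall in the same row, so a run of nearby accesses
# pays the row-select (and later the precharge) cost only once.
print(split_address(0))      # (0, 0)
print(split_address(1023))   # (0, 1023) - last column of row 0
print(split_address(1024))   # (1, 0)    - next address opens a new row
```

Which address bits go to the row and which to the column is the designer's choice; real controllers pick the split to keep likely-sequential accesses within one row.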

EDO RAM:

Extended Data Out is a minor variation on the control logic in the DRAM chip that tells the output pin when to turn on.

In a "standard" (Fast Page Mode) DRAM, the output pin turns off as soon as the Column Address Strobe (CAS) pin goes false. The problem with that comes when you try to do a "burst" read cycle wherein Row Address Strobe (RAS) is held true while CAS toggles up and down real fast. The RAM only drives the data half the time and the other half the time is wasted. This makes a cache fill cycle take longer than it otherwise might, because the cache really can't look at the data unless the DRAM is driving it. (You can't store data on a PC board trace because of inductive kick and other effects. Trust me, you novice board designers out there.)

In an EDO (Nippon Electric Corp calls it Hyper Page Mode) DRAM, the output pin keeps driving until RAS and CAS *both* go false. Your cache can fill faster because the whole duration (grossly oversimplifying) is usable as sampling time.

(Why didn't they do it that way to begin with, some of you are asking. The EDO DRAM can't read and write in the same RAS cycle. The FPM can. That used to be important, but it's not a capability that PCs with caches happen to use.)
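To see why EDO helps a burst, here is a toy timing model (my own numbers, not the FAQ's): assume CAS toggles with some period during a 4-beat burst, an FPM DRAM drives data only while CAS is true (half of each period), and an EDO DRAM drives data for the whole period because the output stays on after CAS goes false.

```python
def usable_sampling_time(beats, cas_period_ns, edo):
    """Total time (ns) the DRAM drives valid data during a burst read.

    FPM: output on only while CAS is true, i.e. half of each CAS period.
    EDO: output stays on until the next column's data replaces it, so
    (grossly oversimplifying, as the FAQ says) the whole period is usable.
    """
    per_beat = cas_period_ns if edo else cas_period_ns / 2
    return beats * per_beat

# A 4-beat burst with a made-up 30 ns CAS period:
fpm = usable_sampling_time(4, 30, edo=False)  # 60 ns of valid data
edo = usable_sampling_time(4, 30, edo=True)   # 120 ns of valid data
print(fpm, edo)
```

The factor of two in sampling time is exactly the "half the time is wasted" point above; whether the system can actually exploit it depends on the cache SRAMs, as the next paragraph notes.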

With today's (cost-oriented) SRAM and ASIC technology, only synchronous SRAMs can take much advantage of the extra bandwidth. That's why you don't get a big benchmark boost when you switch to EDO but leave your cache the way it was before. You have to upgrade both to see the improvement.

Because it's a minor control variation, the chip maker can do most of the wafer fabrication steps before deciding whether a wafer full of chips will be FPM or EDO. Both types can be made on the same process and circuit design, and tested on the same equipment. Therefore, once they all tool up to make it, EDO and FPM will cost about the same. Right now (July '95) EDO costs more only because it's still rare.

SIMMs and SIPPs

Through the 1970s, RAMs were shipped in tubes, and the board makers soldered them into boards or plugged them into sockets on boards. This became a problem when end-users started installing their own RAMs, because the leads ("pins") were too delicate. Also, the individual dual in-line package (DIP) sockets took up too much board area.

In the early 1980s, DRAM manufacturers began offering DRAMs on tiny circuit boards which snap into special sockets, and by the late '80s these "single in-line memory modules" (SIMMs) had become the most popular DRAM packaging. Board vendors who didn't trust the new SIMM sockets used modules with pins: single inline pinned packages (SIPPs), which plug into sockets with more traditional pin receptacles.

PC-compatibles store each byte in main memory with an associated check bit, or "parity bit." That's why you add memory in multiples of nine bits. The most common SIMMs present nine bits of data at each cycle (we say they're "nine bits wide") and have thirty contact pads, or "leads." (The leads are commonly called "pins" in the trade, although "pads" is a more appropriate term. SIMMs don't *have* pins!)
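The check bit is cheap to compute. A sketch (my own, and the FAQ doesn't specify the parity sense, so this arbitrarily uses even parity): the stored bit is chosen so the nine bits together contain an even number of 1s, and any single-bit error makes the count odd.

```python
def parity_bit(byte):
    """Check bit stored with a byte (even parity, chosen for illustration)."""
    ones = bin(byte & 0xFF).count("1")
    return ones % 2   # makes the total number of 1s in all nine bits even

def check(byte, stored_parity):
    """True if the nine bits read back are consistent (no single-bit error)."""
    return parity_bit(byte) == stored_parity

p = parity_bit(0b10110010)       # four 1s -> parity bit 0, total stays even
print(p)                         # 0
print(check(0b10110010, p))      # True
print(check(0b10110011, p))      # False: one flipped bit is detected
```

Note that parity detects any single-bit error but can't correct it, and a two-bit error slips through unnoticed.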

At the high end of the PC market, "36 bit wide" SIMMs with 72 pads are gaining popularity. Because of their wide data path, 36-bit SIMMs give the motherboard designer more configuration options (you can upgrade in smaller chunks) and allow bandwidth-enhancing tricks (i.e. interleaving) which were once reserved for larger machines. Another advantage of 72-lead SIMMs is that four of the leads are used to tell the motherboard how fast the RAMs are, so it can configure itself automatically. (I do not know whether the current crop of motherboards takes advantage of this feature.)
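The interleaving trick mentioned above is simple at heart. A sketch (illustrative only): with two banks, consecutive words alternate between banks, so one bank can be precharging while the other delivers data.

```python
def bank_of(word_addr, banks=2):
    """Which bank holds a given word under simple low-order interleaving."""
    return word_addr % banks

# Sequential words ping-pong between the two banks:
print([bank_of(a) for a in range(6)])   # [0, 1, 0, 1, 0, 1]
```

Because two 36-bit SIMMs make two independent banks, a motherboard with 72-lead sockets can interleave in pairs; with 30-lead SIMMs the same trick needs twice as many modules.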

"3-chip" and "9-chip" SIMMs

In 1988 and '89, when 1 megabit (1Mb) DRAMs were new, manufacturers had to pack nine RAMs onto a 1 megabyte (1MB) SIMM. Now (1993) 4Mb DRAMs are the most cost-effective size. So a 1MB SIMM can be built with two 4Mb DRAMs (configured 1M x4) plus a 1Mb (x1) for the check-bit.

VRAMs:

In graphics-capable video boards, the displayed image is almost always stored in DRAMs. Access to this data must be shared between the hardware which continuously copies it to the display device (this process is called "display refresh" or "video refresh") and the CPU. Most boards do it by time-sharing ordinary, single-port DRAMs. But the faster, more expensive boards use specialized DRAMs which are equipped with a second data port whose function is tailored to the display refresh operation. These "Video DRAMs" (VRAMs) have a few extra pins and command a price premium. They nearly double the bandwidth available to the CPU or graphics engine.

(As far as I know, the first dual-ported DRAMs were built by Four-Phase Systems Inc., in 1970, for use in their "IV-70" minicomputers, which had integrated video. The major DRAM vendors started offering VRAMs in about 1983 [Texas Instruments was first], and workstation vendors snapped them up. They made it to the PC trade in the late '80s.)

Speed

DRAMs are characterized by the time it takes to read a word, measured from the row address becoming valid to the data coming out. This parameter is called Row Access Time, or tRAC. There are many other timing parameters to a DRAM, but they scale with tRAC remarkably well. tRAC is measured in nanoseconds (ns). A nanosecond is one billionth (10^-9) of a second.
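As a back-of-the-envelope exercise (mine, not the FAQ's), a tRAC figure puts a ceiling on raw random-access rate; the real cycle time is longer still, because (as noted earlier) the precharge time must also fit into each full access cycle.

```python
def max_random_accesses_per_second(trac_ns):
    """Upper bound implied by tRAC alone; real cycle time (tRAC plus
    precharge and other parameters that scale with it) is longer."""
    return 1e9 / trac_ns

# A nominal 70 ns part:
print(round(max_random_accesses_per_second(70) / 1e6, 1))   # ~14.3 million/s
```

This is why page-mode and burst tricks matter: staying within an open row sidesteps most of tRAC on every access after the first.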

It's so difficult to control the semiconductor fabrication processes that the parts don't all come out the same. Instead, their performance varies widely, depending on many factors. A RAM design which would yield 50 ns tRAC parts if the fab were always tuned perfectly, instead yields a distribution of parts from 80 to 50. When the plant is new, it may turn out mostly nominal 70 ns parts, which may actually deliver tRAC between 60.1 ns and 70.0 ns, at 70 or 85 degrees Celsius and 4.5 volts power supply. As it gets tuned up, it may turn out mostly 60 ns parts and a few 50s and 70s. When it wears out it may get less accurate and start yielding more 70s again.

RAM vendors have to test each part off the line to see how fast it is. An accurate, at-speed DRAM tester can cost several million dollars, and testing can be a quarter of the cost of the parts. The finished parts are not marked until they are tested and their speed is known.
