Re: PCI Memory Devices

> At 07:41 PM 9/11/96 -0700, you wrote:

Just to clarify this for the rest of the list, Eric was replying to a
private email I sent to him. Somehow he managed to send his reply to the
whole list, which was not my intent. Since it's out, however...

>> The reason is that PCI requires an awful lot of gates, it's way too slow
>> for DRAM, and you can't put enough devices on a PCI bus to handle the
>> actual number of DRAMs you need in a system, 8 to 32. This number is
>> correct no matter how long you wait.

> Is it really too slow for systems with 256K to 1M L2 caches these days?
> PCI burst rate is reasonably high and the trick would be to design an
> interface that is low latency. 3.3v @ 66Mhz coupled closely to the processor
> gives very nice performance.

Yes, PCI is way too slow to handle PC main-memory requirements no matter
how large the cache is, no matter how well you design the interface. PCI is
inadequate in terms of latency and bandwidth, and there's just no getting
around that. It's designed to be; this is a feature, not a bug! If PCI were
fast enough to be a main-memory bus, it would be too expensive to be a
peripheral bus.
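The bandwidth side of the gap is easy to put numbers on. A rough sketch
(peak rates only, ignoring latency and arbitration overhead, comparing
standard PCI against a hypothetical 66-MHz, 64-bit memory bus of the era):

```python
# Back-of-envelope peak-bandwidth comparison. These are nominal figures,
# not measurements; the memory-bus configuration is illustrative.

def peak_bandwidth_mb_s(clock_mhz, bus_bits):
    """Theoretical peak transfer rate in MB/s for a synchronous bus."""
    return clock_mhz * bus_bits / 8

pci = peak_bandwidth_mb_s(33, 32)     # standard PCI: 33 MHz, 32 bits wide
sdram = peak_bandwidth_mb_s(66, 64)   # a plausible 66 MHz, 64-bit DRAM bus

print(f"PCI peak:   {pci:.0f} MB/s")    # 132 MB/s
print(f"memory peak: {sdram:.0f} MB/s") # 528 MB/s
```

And those are best-case burst figures for PCI; sustained rates after
arbitration and target latency are considerably lower.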

> It doesn't make sense until you get to the point of 64M drams = 8 Megabytes
> on ONE chip. It really starts to make sense with 256M drams = 32 Megabytes
> on ONE chip! The logic required for PCI is MINISCULE compared to the
> DRAM array.

When you get to 64-Mbit DRAMs, in a year or two, you'll still need 8 to 32
of them in a system because you'll want 64M to 256M of DRAM in the system.
When we get to the 256-Mbit generation in 2000, we'll want 256M to 1G of
DRAM in every mainstream PC. Heck, even today, 64M is only $400 or so.
That's what people were spending for 12M configurations a year ago.
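The chip-count arithmetic here is simple enough to write down (a toy
calculation using the densities and capacities from the text, nothing more):

```python
# How many DRAM chips a given system capacity requires at a given density.
# Densities are in megabits, targets in megabytes, as quoted above.

def chips_needed(density_mbit, target_mb):
    """DRAM chips required to reach target_mb of main memory."""
    chip_mb = density_mbit // 8   # 8 bits per byte
    return target_mb // chip_mb

# 64-Mbit generation: 8 MB per chip
assert chips_needed(64, 64) == 8
assert chips_needed(64, 256) == 32
# 256-Mbit generation: 32 MB per chip
assert chips_needed(256, 256) == 8
assert chips_needed(256, 1024) == 32
```

Note the count comes out 8 to 32 in both generations, which is the point:
capacity targets and chip density rise together.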

As I said, the number of DRAMs in a PC does not vary with time because
memory requirements go up in direct proportion to DRAM density. This isn't
a coincidence, it's that old invisible hand of the market guiding the
typing fingers of the guys at Microsoft and Apple and every other software
company on the planet.

> I would even wager to say that the logic for the PCI would be
> the same or even SMALLER than the logic required to implement EDO/PAGE/nibble
> and all the other crazy interfaces.

Well, this certainly isn't true. An efficient PCI interface has around 10K
to 20K gates of logic, plus anywhere up to 80K gates of buffer memory
depending on the application. Conventional DRAMs have only a few hundred
gates of logic. SDRAMs may have a couple of thousand gates of logic, but
that's it.

> The real benefit to the industry I see would be the simplification of the
> user interface. No more poring over hundreds of timing numbers to try to come
> up with a common set of parameters to get second sources...

This is solved by the common SDRAM specifications released by JEDEC and
(slowly) being adopted by the SDRAM companies. These support 66 to 133 MHz
operation on arbitrarily deep and wide arrays of DRAM, not the 33 MHz,
32-bit, 4-slot interface you get with PCI.
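The difference shows up directly in peak rates, since the SDRAM interface
scales in clock while PCI is pinned at 33 MHz and 32 bits. A quick sketch
(hypothetical configurations, peak rates only):

```python
# Peak-rate scaling comparison. The SDRAM configurations are illustrative
# points in the 66-133 MHz range mentioned above, on a 64-bit bus.

def peak_mb_s(clock_mhz, bus_bits):
    """Theoretical peak transfer rate in MB/s."""
    return clock_mhz * bus_bits // 8

configs = {
    "PCI (33 MHz, 32-bit)":    peak_mb_s(33, 32),    # 132 MB/s
    "SDRAM (66 MHz, 64-bit)":  peak_mb_s(66, 64),    # 528 MB/s
    "SDRAM (133 MHz, 64-bit)": peak_mb_s(133, 64),   # 1064 MB/s
}

for name, mb_s in configs.items():
    print(f"{name}: {mb_s} MB/s")
```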

                     png