
Re: Yet another successor to PCI!



On Mon, 20 Mar 2000 09:05:57 -0800, Alan Deikman <Alan.Deikman@znyx.com>
wrote:

>I just happened to glance through EE Times and learned
>about another technology initiative for a bus/interconnect
>that is supposed to replace PCI and get rid of all its
>weaknesses.
>
>Here is my list of them so far:
>
>    Infiniband (formerly NGIO)
Not accurate: the NGIO group (Intel, etc.) merged with Future I/O (Sun, etc.)
to form System I/O, which was later renamed Infiniband.

Infiniband technology is Ethernet-like, at 2.5 Gbit/s per link, with 1, 4, or
12 links in parallel. The link protocol supports hardware QoS by providing up
to 16 virtual lanes per link. On top of the link technology, Infiniband also
defines the network and transport layers (IPv6-based), as well as bus semantics
(Remote DMA). For reasonable performance, you are expected to handle at least
some of the networking/transport level in hardware.
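
To make the Remote DMA part concrete, here is a rough C sketch of what an RDMA
write descriptor might carry. The struct and field names are my own invention
for illustration - the real formats are defined by the spec:

    #include <stdint.h>

    /* Hypothetical RDMA write descriptor -- names invented for
     * illustration, not taken from the Infiniband spec. */
    struct rdma_write_desc {
        uint64_t local_addr;   /* source buffer in our memory      */
        uint64_t remote_addr;  /* destination in the peer's memory */
        uint32_t remote_key;   /* access key granted by the peer   */
        uint32_t length;       /* number of bytes to transfer      */
    };

The point of bus semantics is exactly this: you name a remote memory address
directly, instead of sending a message and letting the other side figure out
where the data goes.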

Infiniband also defines 'verbs', an abstract API layer for Infiniband. It
defines the functionality required of the software driver, without committing
to a specific API (which is O/S-specific). This means that in order to take
full advantage of Infiniband, your O/S must support it.
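
As a rough illustration of what that means in practice, a verbs-style
interface might boil down to calls like the following. These names and
signatures are purely hypothetical - the spec defines the required
functionality, and each O/S picks its own API:

    /* Hypothetical verbs-style calls -- names and signatures
     * invented for illustration only. */
    struct ib_queue_pair;    /* opaque handle, O/S-defined         */
    struct rdma_write_desc;  /* e.g. the descriptor sketched above */

    struct ib_queue_pair *ib_create_qp(void);  /* set up send/recv queues */
    int ib_post_send(struct ib_queue_pair *qp,
                     const struct rdma_write_desc *desc); /* queue an RDMA op */
    int ib_poll_completion(struct ib_queue_pair *qp);     /* reap finished ops */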

>    PCI-X      (for those who favor small increments)
I'm sure many people other than me will be able to talk about this new
standard.

>    RapidIO    (see March 6 EE Times)
I understand that this technology originated at Motorola.
The white paper says this is a parallel bus using differential LVDS buffers,
with 8 data bits, 1 control bit ("frame"), and 1 clock signal in each
direction. That brings the RapidIO link to 40 wires (see the arithmetic below).
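
The arithmetic behind the 40-wire figure, as a trivial C check (the numbers
come straight from the white paper; only the program is mine):

    #include <stdio.h>

    int main(void)
    {
        /* 8 data + 1 frame + 1 clock = 10 signals per direction */
        int signals_per_dir = 8 + 1 + 1;
        /* doubled for differential LVDS pairs, doubled again for
         * the two directions */
        int wires = signals_per_dir * 2 * 2;
        printf("RapidIO link: %d wires\n", wires);  /* prints 40 */
        return 0;
    }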

RapidIO is much simpler than Infiniband: it only provides bus semantics, and
it doesn't require any special O/S support - it's fully transparent to the
software.
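
To illustrate the transparency claim: a driver would touch a device sitting
behind a RapidIO fabric with plain loads and stores, just like a memory-mapped
PCI device today. A minimal sketch, with a made-up register address:

    #include <stdint.h>

    /* Hypothetical control register of a device behind a RapidIO
     * fabric; the address is invented for illustration. */
    #define DEV_CTRL_REG ((volatile uint32_t *)0xFE000000)

    void reset_device(void)
    {
        *DEV_CTRL_REG = 0x1;  /* ordinary store; the fabric routes it */
    }

No special calls, no protocol stack - the fabric is invisible to this code,
which is what "fully transparent to the software" buys you.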

How all of these fit together:
------------------------------
Infiniband allows you to link multiple chassis together (the standard also
supports on-board peripherals, but I don't see who would bother). You gain
scalability, reliability, and hot insertion/removal.

You use PCI-X as your in-chassis expansion connector. In fact, you could think
of expansion chassis with PCI-X expansion slots, linked by an Infiniband
connection to your system.

As for RapidIO, you could use it for building large SMP systems. Since the
RapidIO link is point-to-point, you would use switches - for example, a switch
with 4 RapidIO ports, 2 of them connected to CPUs and 2 connected to PCI-X
bridges.
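
In that 4-port example, the switch's job reduces to mapping a destination ID
to an output port. A minimal sketch, with invented IDs and port numbers:

    /* Hypothetical routing table for the 4-port switch above --
     * destination IDs and port assignments invented for illustration. */
    enum { PORT_CPU0, PORT_CPU1, PORT_PCIX0, PORT_PCIX1 };

    static const int route_by_dest_id[4] = {
        PORT_CPU0,   /* dest ID 0: first CPU           */
        PORT_CPU1,   /* dest ID 1: second CPU          */
        PORT_PCIX0,  /* dest ID 2: first PCI-X bridge  */
        PORT_PCIX1,  /* dest ID 3: second PCI-X bridge */
    };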

>I get the feeling that there are probably more out there.
>Anyone care to comment?
There is also AMD's LDT technology, which uses serial links to connect
multiple processors. I guess it's similar to RapidIO, but AFAIK it's not
openly available at the moment. I'm also not sure whether it's found in the
current generation of Athlons, or whether the links come out of the chipset
(linking multiple chipsets).

>----------------------------------------------------
>ZNYX Networks - Alan.Deikman@znyx.com - 510 249 0800

Udi Finkelstein