Re: pci compliant devices?
Peter Marek wrote:
-snip-
> > Timing: The FPGA plus PCI core has to ensure timing is met. The core
> > vendor has to test for minimum delays (e.g. min clock to out of 2 ns)
> > and guarantee the timing. Xilinx guarantees that our PCI cores meet PCI
> > timing.
> >
> Sure. But it's harder to achieve the timing in an FPGA than in an ASIC.
> 66 MHz PCI, especially, is tough to meet with an FPGA structure.
66 MHz PCI is a serious design challenge. But what we bring to the table
is a PCI portion that already works at 66 MHz for you.
For example, the most difficult path is IRDY and TRDY to the CE on the
output FFs; you have to go through a level of logic and fan out to over
80 loads in 3 ns, and balance this against the clock for 0 ns hold time.
That is equivalent to running closer to 220 MHz once you factor in the
clock delay. This is obviously a path few people should have to deal
with.
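To put numbers on this, here's a quick back-of-the-envelope sketch in
Python; the 1.5 ns clock-distribution delay is an assumption for
illustration, not a datasheet figure:

```python
# Back-of-the-envelope budget for the IRDY#/TRDY# -> output-FF CE path at
# 66 MHz PCI. The clock-distribution delay is an illustrative assumption.

T_SU_NS = 3.0          # setup window: IRDY#/TRDY# only valid 3 ns before CLK
CLK_DELAY_NS = 1.5     # assumed internal clock-distribution delay (a late
                       # clock at the flip-flop widens the usable window)

window_ns = T_SU_NS + CLK_DELAY_NS
print(f"usable window: {window_ns} ns")                 # 4.5 ns
print(f"equivalent rate: {1000 / window_ns:.0f} MHz")   # ~222 MHz

# One level of logic plus a fanout of 80+ CE loads must fit in this
# window, while still balancing against the clock for 0 ns hold.
```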
ASIC designers have the same problem we did; this path has to be tuned
to fall exactly in this 3 ns window. It takes about three spins of the
design to achieve this. We did our three spins in a week or two; the
ASIC designers had to go back to the fab and respin the design, which
can take up to three months per spin. I haven't heard of anyone getting
it right on the first try.
I think the overall difficulty of dealing with this unregistered path
was a major motivating factor in doing PCI-X, which is fully registered.
> > Protocol: The core has to correctly implement the PCI transaction rules.
> > All PCI core developers (including ASSP and ASIC IP vendors) have bugs in
> > their initial implementations. The key is to respin the design and remove
> > the bugs. Here FPGAs have an advantage as the design can be respun
> > quickly and retested, provided the FPGA vendor is putting the effort
> > into it. Xilinx has maintained a serious PCI effort, including fixing
> > issues and updating the PCI core to meet changes in the PCI spec and to
> > work with the latest Xilinx software, since we introduced our first
> > version in 1996. At this time we don't have any errata on our v3.0 PCI
> > core.
>
> It's an advantage as long as you have access to the VHDL source and expert
> knowledge of PCI. Mostly, IP customers have neither. IP customers who
> want the advantage of using an IP core do not want to dig into the
> details of the IP. That's the idea behind IP, isn't it?
Actually, we did our PCI core in Viewlogic, as the available FPGA
synthesis technology in 1995 couldn't achieve the speed we needed. As
more customers wanted VHDL and Verilog support, we migrated to a netlist
approach, where the top-level files are in HDL and instantiate the
netlist as a black box.
Since our PCI core doesn't include FIFOs or DMA in the core, the only
portion customers generally needed to modify was the configuration
space. We handled this with an HDL configuration file that allows most
modifications. The few config space mods customers wanted that we didn't
include, such as expansion ROM BARs, were handled by me, or more
recently by our design services. MEMEC, who OEMs our PCI core, bundles
several days of design services with the core so the customer can have
some mods made at little to no extra cost.
In the early days, when we had errata for our core, there was a reason
to look inside and fix the core. But as we fixed these issues, the need
to modify it became less and less. Aside from these config space mods,
and since there are no current errata against the core, there isn't a
lot of reason to open up our PCI core. Sure, we get a lot of interesting
requests, and we handle each of these on a per-case basis, but by
working out the bugs and adding PCI spec changes (remember the subsystem
vendor IDs?) we've made the current core very solid.
>
-snip-
> > > Another item is that PCI doesn't specify the time between reset and the
> > > first PCI bus access. This time may change from system to system, and your
> > > FPGA may still be loading its bitstream while the PCI BIOS first scans the
> > > bus... you may choose to use PCI interface chips and/or non-volatile FPGA
> > > techniques (e.g. QuickLogic QuickPCI).
> >
> > The PCI specification does specify the time between reset and the first
> > configuration access.
> >
> > In the PCI 2.2 spec, Trhfa (RST# High to First Access) is specified as
> > 2^25 clocks. At 33 MHz, this is about 1 second. This is a really old
> > issue that was resolved by an ECR to the 2.1 PCI specification. But even
> > if you're still working off the 2.1 PCI spec, this was never a real
> > issue, as all PC vendors have a power-on reset that is usually measured
> > in hundreds of milliseconds. Even older FPGA families (such as the
> > Xilinx 4000 series) could meet this with a serial PROM in fast mode.
> >
>
> Ok, PCI 2.2 specifies such a time... but not all systems are compliant
> with PCI 2.2. Embedded systems especially will boot very quickly, and I
> have seen systems that do the first PCI configuration access in less
> than 1 s.
In my experience, embedded systems designers usually have considerable
control over their systems; in those cases one should design to the
system's specification. You do have a lot of choices in how fast you
configure the FPGA.
Out of 500+ customers and over 1200 designs, we've never had a problem
with this. But it was a FAQ before the Trhfa ECR was published.
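For a rough feel of the margins, here's a minimal sketch; the bitstream
size and configuration clock are illustrative assumptions for a mid-size
device, not figures for any particular part:

```python
# Trhfa (RST# high to first config access) versus serial config time.
# Bitstream size and config clock rate are illustrative assumptions.

TRHFA_CLOCKS = 2 ** 25
PCI_CLK_HZ = 33_000_000
print(f"Trhfa at 33 MHz: {TRHFA_CLOCKS / PCI_CLK_HZ:.2f} s")  # ~1.02 s

BITSTREAM_BITS = 2_000_000    # assumed mid-size bitstream
CCLK_HZ = 8_000_000           # assumed serial PROM clock in fast mode

config_s = BITSTREAM_BITS / CCLK_HZ
print(f"serial configuration: {config_s * 1000:.0f} ms")      # 250 ms << 1 s
```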
> > > Moreover, volatile FPGAs will not work in mixed (open) 32bit/64bit designs,
> > > since the bus width is configured at the end of reset on PCI. Probably,
> > > SRAM-based FPGAs will not be fully configured at that time.
> >
> > This issue is the need to detect the state of REQ64# (and some other
> > signals in PCI-X) at the time RST# deasserts. This is only needed for
> > 64-bit PCI and PCI-X and doesn't matter for 32-bit PCI. Previously, this
> > was specified as 1 ms min and 100 ms typ in the 2.2 spec.
>
> Yes, but as soon as Intel solves their Rambus issues on the latest
> chipsets, 64-bit PCI will become reality. We will see a lot of mixed
> 32bit/64bit systems by the end of this year. Embedded platforms like
> PowerPC boards for CompactPCI have 64-bit PCI built into the
> northbridge. So there's strong demand for 64-bit in the embedded world.
> And the embedded market is said to top the PC market in the near
> future, isn't it?
Yes, and those who are building 64-bit motherboards should be following
both Intel's and the PCI SIG's requirements of a 100 ms minimum POR. The
only way this ECR passed was that the PCI SIG steering committee agreed
to it; Intel, IBM, Compaq, and others probably wouldn't have voted for
something that would have obsoleted their current 64-bit systems (and
warehouses full of power supplies!).
Interoperability isn't solely the responsibility of the FPGA vendor; MB
makers have specifications to follow as well. But let's give them
credit; I believe most MB vendors these days follow the specifications.
> >
> > To resolve this and give the SRAM FPGA vendors some time to configure,
> > an ECR was adopted by the PCI SIG. The new value, Tpvrh (Power Valid to
> > RST# high), is 100 ms for both PCI and PCI-X. The plug-in board designer
> > will need to evaluate their FPGA configuration method and ensure that
> > it meets this specification. The largest announced Xilinx FPGA can meet
> > this using SelectMAP configuration. Smaller FPGAs can use a serial PROM
> > in fast mode.
>
> That's what I wanted to say. I just wanted to make HW engineers aware of
> some issues that are hidden below the stack of marketing material.
Agreed. But sometimes this stack of marketing material is used against
us by the nonvolatile FPGA vendors, and I want hardware developers to
know the answers to these questions. We've worked hard to resolve these
issues and, I think, come up with some reasonable solutions.
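As a sanity check on the 100 ms number, here's a sketch; the bitstream
size and SelectMAP clock are illustrative assumptions, not
specifications for any particular device:

```python
# Can a large SRAM FPGA configure inside the 100 ms Tpvrh window?
# Bitstream size and SelectMAP clock are illustrative assumptions.

TPVRH_S = 0.100                # Power valid to RST# high (ECR value)

BITSTREAM_BITS = 12_000_000    # assumed large-device bitstream
SELECTMAP_HZ = 50_000_000      # assumed clock on the 8-bit SelectMAP port
BYTES_PER_S = SELECTMAP_HZ     # one byte per clock over the 8-bit port

config_s = (BITSTREAM_BITS / 8) / BYTES_PER_S
verdict = "meets" if config_s < TPVRH_S else "misses"
print(f"SelectMAP configuration: {config_s * 1000:.0f} ms ({verdict} Tpvrh)")
# -> 30 ms, comfortably inside the 100 ms budget
```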
> >
> > Admittedly, this can require some extra effort to meet in some of the
> > larger FPGAs, but it was a fair compromise between the motherboard
> > vendors and the FPGA vendors (and the instant-on PC developers who want
> > 5 seconds max to start a PC). The challenge for SRAM FPGA vendors is to
> > ensure their largest FPGAs can configure in under 100 ms. Setting the
> > time at 100 ms means the PC vendors don't have to obsolete a lot of
> > hardware, as most, if not all, were already meeting this spec (Intel was
> > already recommending 100 ms min for ATX power supplies when the ECR was
> > filed).
> >
>
> > This one is a bit newer than the Trhfa ECR, so not everyone may be
> > aware of it. More details can be found on the ECR page:
> > http://www.pcisig.com/tech/ecn_ecr.html (PCISIG members only)
> >
>
> The problem is that if you design an add-in board, you need to be
> compliant with both future and existing systems. So new ECRs are fine,
> but there are millions of PCI systems out there. It will take some time
> before these ECRs are incorporated into the spec, and more time still
> before most of the systems in the field are compliant with them.
> BTW, ECRs are Engineering Change Requests; these may still not be
> integrated into the spec if they are not accepted by the technical
> committees within the PCI SIG. ECRs need to become ECNs (Engineering
> Change Notices) before they may be considered a change to the spec. You
> can't build systems on speculation...
This ECR only affected 64-bit systems; there probably aren't millions
of 64-bit systems out there yet. The few we looked at were already
compliant with this requirement. It's reasonable design practice to have
a 100+ ms POR; this isn't going to prevent you from building a 5-second
turn-on PC.
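If you're sizing such a POR from scratch, a simple RC delay illustrates
the idea; the component and threshold values below are illustrative
assumptions (in practice a supervisor IC with a fixed delay is the more
common choice):

```python
import math

# Rough RC sizing for a >= 100 ms power-on reset. Values are illustrative.
V_SUPPLY = 3.3
V_TRIP = 1.65        # assumed input threshold of the reset buffer

def rc_delay_s(r_ohms: float, c_farads: float) -> float:
    """Time for the RC node to charge from 0 V to the trip point."""
    return -r_ohms * c_farads * math.log(1 - V_TRIP / V_SUPPLY)

print(f"{rc_delay_s(150e3, 1e-6) * 1000:.0f} ms")  # ~104 ms with 150k / 1 uF
```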
The goal of the PCI industry is not to see how many interoperability
problems we can create, but how many we can solve. The issues you have
raised are good questions, and we've spent a lot of time solving them
over the years. FPGAs have become a very significant player in this
industry, and the PCI SIG has graciously worked with us to resolve these
issues. We will continue to work toward this goal as long as PCI remains
a useful bus standard.
Jim McManus
Xilinx PCI Applications Engineer