
Simulating/verifying PCI bidir requirements

I've seen results from Verilog bus monitors from two vendors
and I'm not impressed with their coverage. 

I've even seen a bus model that drives PERR# strongly to 1 during the
required dead cycle, causing my monitor to fail.

I wonder if all PCI interface designs are really verifying some
of these things in simulation.

Things that seem critical for proper electrical/timing behavior on the bus,
but easy to miss in a simulation model:

1) Sustained Tri-State signals are driven (by you) high in the last cycle 
you drive them.
(this one is tricky, because if you put in a dumb pullup model, it will
pull the undriven signal to 1 during the dead cycle, so your simulation
might "pass")
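
For what it's worth, here's the kind of check I'd sketch for 1), leaning
on strength inspection ("%v") so a pullup model can't fake a pass. This is
only a sketch; the module/signal/clock names are made up, and $swrite is
Verilog-2001:

```verilog
// Flags a sustained tri-state signal that was released while its last
// strongly-driven value was 0 (i.e. nobody drove it high first).
module sts_float_mon (input clk, input irdy_l);
  reg [24:1] str, str_q;     // "%v" strings, e.g. "St1", "Pu1", "HiZ"
  initial str_q = "Pu1";
  always @(posedge clk) begin
    $swrite(str, "%v", irdy_l);
    // no strong driver this cycle, but the last driven value was 0
    if (str != "St0" && str != "St1" && str_q == "St0")
      $display("%0t: irdy_l released without a final driven-high cycle",
               $time);
    str_q = str;
  end
endmodule
```

The point of looking at strength instead of value is exactly the trap
above: with a pullup in the model, the value reads 1 either way.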

2) Dead cycle between drivers on Sustained Tri-State signals.
Again, the pullup's behavior can hide the lack of a dead cycle;
the easiest check is to look for a weak resistor drive between strong drives.
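
Here's a sketch of that "weak between strongs" idea. The catch is that a
pure value/strength monitor can't tell two back-to-back strong drives by
the *same* agent (legal) from a missing dead cycle, so this assumes a
'turnaround' input marking the clock where ownership is supposed to
change -- in a real bench you'd derive that from FRAME#/IRDY#/GNT#
tracking. Names are placeholders:

```verilog
// During the turnaround clock the only thing on the wire should be
// the pullup (Pu1).  A strong drive there means no dead cycle.
module dead_cycle_mon (input clk, input turnaround, input trdy_l);
  reg [24:1] str;
  always @(posedge clk)
    if (turnaround) begin
      $swrite(str, "%v", trdy_l);
      if (str == "St0" || str == "St1")
        $display("%0t: no dead cycle on trdy_l (still strongly driven)",
                 $time);
      else if (str != "Pu1")
        $display("%0t: trdy_l not pulled high in dead cycle (%s)",
                 $time, str);
    end
endmodule
```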

3) Dead cycle between ad/cbe_l/par is easy: just look for z's.
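
Since these signals have no pullup, a plain value check does it. Again
'turnaround' is a placeholder for however you detect the turnaround clock:

```verilog
// ad should read all-z on the turnaround clock; any driven bit is a miss.
module ad_turnaround_mon (input clk, input turnaround, input [31:0] ad);
  always @(posedge clk)
    if (turnaround && ad !== {32{1'bz}})
      $display("%0t: ad not fully tri-stated at turnaround (ad=%h)",
               $time, ad);
endmodule
```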

4) Assuming you're driving outputs from a register, you probably don't want
the output register to switch values in the cycle you stop driving.
If the oe turn-off is slower than the data path out, the pin can glitch to
the new data value, which undoes the work you just did to end your drive
with a driven 1.

Or do people just assume these glitches "filter out" thru the driver/bus?
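
One way I'd sketch avoiding it at the source: qualify the data register's
load with the same term that sets the oe flop, so the register holds its
last (high) value while oe falls and any oe-vs-data race still shows a 1.
Names are made up:

```verilog
// Output stage that refuses to change the data flop in a cycle where
// the pad won't be driven, so a slow oe turn-off can't expose new data.
module sts_out (
  input  clk,
  input  oe_next,   // drive the pad next cycle?
  input  d_next,    // value to drive next cycle
  output pin
);
  reg oe_q, d_q;
  always @(posedge clk) begin
    oe_q <= oe_next;
    if (oe_next) d_q <= d_next;   // hold d_q when not driving
  end
  assign pin = oe_q ? d_q : 1'bz;
endmodule
```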

5) Not relying on synchronous deassertion of SERR# (it's open drain, so
only the pullup restores it, and that can take more than one clock).

I'm mostly interested in what people do in verilog to verify 2).