In general, if your device can't handle bursts, it should take responsibility for stopping them itself rather than simply relying on masters not to attempt them.
In fact, making a non-burst slave is relatively easy. Whatever logic you are using to drive TRDY#, use that same logic to drive STOP#. This ensures that you stop the transfer after the first word is transferred. This is valid regardless of whether the master intended a burst or not (i.e., whether FRAME# is held asserted along with IRDY# or not).
If the master attempted a burst, you'll be disconnecting it. If it didn't attempt a burst, the STOP# is overkill but harmless, and both agents are simply agreeing not to burst. Both cases are legal.
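To illustrate the idea, here is a toy behavioral sketch (not RTL, and not a real PCI bus model): active-low signals are modeled as booleans where True means "asserted", and the function and signal names are hypothetical. The point it demonstrates is that when the target drives STOP# from the same condition as TRDY#, every transaction completes exactly one data phase, regardless of how long a burst the master intended.

```python
# Toy model: a target that asserts STOP# with the same logic as TRDY#.
# Active-low PCI signals are represented as booleans (True = asserted).
# All names here are illustrative, not from any real core's API.

def target_outputs(ready_to_transfer):
    """Drive TRDY# and STOP# from the same condition."""
    trdy = ready_to_transfer
    stop = ready_to_transfer  # same logic as TRDY# -> one data phase max
    return trdy, stop

def run_burst_attempt(words_master_wants):
    """Master tries to burst; return how many data phases complete."""
    completed = 0
    frame = True  # master keeps FRAME# asserted, intending a burst
    while frame:
        irdy = True                      # master ready every cycle
        trdy, stop = target_outputs(True)
        if irdy and trdy:
            completed += 1               # a data phase completes
        if stop or completed == words_master_wants:
            frame = False                # STOP# forces the master to end
    return completed

print(run_burst_attempt(4))  # intended 4-word burst is cut to one word
```

Running this prints 1: the master wanted four words, but the disconnect-with-data (TRDY# and STOP# asserted together) terminates the transaction after the first data phase, which is exactly the behavior you want for single-access registers.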
Senior Hardware Engineer
Brocade Communications Systems
Note: I speak for myself, not for Brocade
From: Alex Horvath [mailto:firstname.lastname@example.org]
Sent: Friday, October 18, 2002 4:24 PM
Subject: Non-burst accesses
I'm designing an interface to a PCI target core, and I have several 32-bit
registers mapped in the I/O space. These registers can only be
written/read via single accesses (no bursts).
While simulating my PCI environment I was unable to generate a single-word
read. Although this seems to be a limitation of the simulator's master model,
I am wondering if it is possible to guarantee that a master will never
attempt to burst from an I/O location. This would imply that accesses to a
particular I/O location would always be master-terminated after the first
word.
In general, is it necessary to provide a method for target disconnect on
registers which cannot be accessed as part of a burst?