Re: FW: Delayed Xaction and LOCK#
- To: Mailing List Recipients <pci-sig-request@znyx.com>
- Subject: Re: FW: Delayed Xaction and LOCK#
- From: "Monish Shah" <monish@mcsy2.fc.hp.com>
- Date: Tue, 27 Aug 1996 18:50:02 -0600
- In-Reply-To: Mark Gonzales <markg@scic.intel.com> "Re: FW: Delayed Xaction and LOCK#" (Aug 27, 3:37pm)
- References: <9608272237.AA25701@rs060.scic.intel.com>
My original comment:
> >> use of LOCK# should be avoided if at all possible.
To this, Dave Schneider replied:
> This could be a problem for us software types. The LOCK# signal is
> typically used for our mutual exclusion schemes (dining philosophers and
> all that). If we can't guarantee LOCK#, especially in a multi-master
> environment, what is the recommended way of doing mutual exclusion?
To this, Mark Gonzales replied:
> Just use locations in main memory for your CPUs' semaphores. You will
> get excruciatingly bad performance if you attempt to synchronize many
> CPUs by using locked CPU accesses to a semaphore in memory that resides
> on the PCI bus.
For mutual exclusion between multiple CPUs in an MP system, you'd
definitely want to use whatever semaphore capability the architecture
supports.  That capability presumably relies on memory locations, and you
should certainly use *main memory* locations.  That way, LOCK# on PCI
would be irrelevant.  (BTW, the semaphore location in main memory should be
aligned to its size, or else you create problems for the system.)
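For illustration, here's a minimal sketch of such a main-memory semaphore,
written in C with C11-style atomics.  The actual primitive would be
whatever atomic test-and-set or swap your architecture provides; this is a
sketch under those assumptions, not the one true implementation.

    #include <stdatomic.h>

    /* Semaphore lives in ordinary main memory, aligned to its size. */
    static _Alignas(sizeof(atomic_int)) atomic_int sem = 0;

    void acquire(void)
    {
        /* Atomic test-and-set: spins in main memory, never on PCI. */
        while (atomic_exchange(&sem, 1) != 0)
            ;  /* busy-wait; a real system would back off or sleep */
    }

    void release(void)
    {
        atomic_store(&sem, 0);
    }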
> Do you really have a product that has *PCI* masters that are
> synchronizing amongst themselves using locked accesses to a semaphore?!?
Mutual exclusion between multiple PCI cards does sound far-fetched.  But,
if it is required, you could have the drivers for the cards participate in
a semaphore scheme, each driver participating on its card's behalf.  The
driver can then notify the card when the semaphore has been obtained.  The
driver must also take the semaphore back from the card before releasing it.
This way, the problem is reduced to mutual exclusion between CPUs.
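Here's a rough sketch of that proxy idea, with an entirely hypothetical
"grant" register on the card (no real card defines one); acquire() and
release() are the main-memory semaphore routines sketched above.

    extern void acquire(void);   /* main-memory semaphore, as above */
    extern void release(void);

    /* Driver acquires the inter-CPU semaphore on the card's behalf,
     * then tells the card it may proceed. */
    void grant_sem_to_card(volatile unsigned *grant_reg)
    {
        acquire();
        *grant_reg = 1;          /* notify card: semaphore is yours */
    }

    /* Take the semaphore back from the card before releasing it. */
    void revoke_sem_from_card(volatile unsigned *grant_reg)
    {
        *grant_reg = 0;          /* card must stop using the resource */
        release();
    }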
However, I suspect that Dave was really talking about a programming model
where he needs mutual exclusion between a card and its driver. This can
also be accomplished without using LOCK#. Here's one possibility:
Design the card with a "semaphore" register. The register is 0 when no one
owns the semaphore. It is 1 when someone does. If the card wants the
semaphore, the hardware internal to the card checks the register. If it is
0, it sets the bit and considers the semaphore its own. This operation
must be done such that it appears atomic from a software perspective. Now,
when software wants the semaphore, it simply reads the register. The
hardware returns the existing state of the register as read data and then
sets the bit (if not already set). This must also be done atomically
w.r.t. the internal hardware that might request this semaphore. To release
the semaphore, software writes a 0 to the register.
The programming model is simple. Just read the register when you want the
semaphore. If the read returns a 0, you got the semaphore. If it returns
a 1, you didn't. To release the semaphore, write a 0.
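In driver code, that might look something like the sketch below.  The
SEM_REG offset and the mapped register pointer are assumptions for
illustration, not part of any real card.

    #define SEM_REG 0x10   /* hypothetical offset of the semaphore register */

    /* The read itself is the test-and-set: 0 means we got the semaphore. */
    int try_acquire(volatile unsigned char *regs)
    {
        return regs[SEM_REG] == 0;
    }

    void acquire_sem(volatile unsigned char *regs)
    {
        while (!try_acquire(regs))
            ;  /* poll; in practice you'd delay or sleep between reads */
    }

    void release_sem(volatile unsigned char *regs)
    {
        regs[SEM_REG] = 0;   /* write a 0 to release */
    }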
This was just one example; I'm sure many other schemes are possible.
Monish Shah
Hewlett Packard