Re: FW: Delayed Xaction and LOCK#
- To: Mailing List Recipients <pci-sig-request@znyx.com>
- Subject: Re: FW: Delayed Xaction and LOCK#
- From: frank.story@tempe.vlsi.com
- Date: Wed, 28 Aug 1996 09:02:48 -0700
- Resent-Date: Wed, 28 Aug 1996 09:02:48 -0700
- Resent-From: pci-sig-request@znyx.com
- Resent-Message-Id: <"fUM7w1.0.lM6.Gh89o"@dart>
- Resent-Sender: pci-sig-request@znyx.com
LOCK# is also used to cover a potential deadlock scenario for PCI-to-PCI
bridges. See page 115 of the 2.1 spec for some discussion of the
problem.
Frank
> From: Mark Gonzales <markg@scic.intel.com>
>
> d_schneider@emulex.com writes
> >> From: "Monish Shah" <monish@mcsy2.fc.hp.com>
> >> Message-Id: <9608270951.ZM7619@hpfcmss.fc.hp.com>
> >> use of LOCK# should be avoided if at all possible.
> >
> >This could be a problem for us software types. The LOCK# signal is
> >typically used for our mutual exclusion schemes (dining philosophers and all
> >that). If we can't guarantee LOCK#, especially in a multi-master
> >environment, what is the recommended way of doing mutual exclusion?
>
> Just use locations in main memory for your CPUs' semaphores. You will
> get excruciatingly bad performance if you attempt to synchronize many
> CPUs by using locked CPU accesses to a semaphore in memory that resides
> on the PCI bus.
>
> Do you really have a product that has *PCI* masters that are
> synchronizing amongst themselves using locked accesses to a semaphore?!?
>
> --
> Mark Gonzales.
Frank Story frank.story@tempe.vlsi.com
VLSI Technology 602-752-6098
Computing Products Group