
RE: RE: 1 IRQ line for all bridged dev? RE: prob w/PCI add-in cards w/PCI 2PCI bridge on it



> All:
> there's a clear definition of interrupt routing behind a 
> PCI2PCI bridge.
> Look at the PCI bridge specification for details.

No one was talking about violating the interrupt routing requirements.  The
question was: why select secondary-side device numbers such that all
secondary devices are effectively connected to INTA on the primary side of
the P2P bridge?
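For reference, the routing in question is the device-number swizzle defined
in the PCI-to-PCI Bridge Architecture Specification: the primary-side pin is
(device number + secondary-side pin) mod 4, so devices 0, 4, 8, and 12 all
present their INTA on the bridge's primary INTA.  A minimal sketch of the
arithmetic in C (the names are mine, not from the spec):

/* Primary-side interrupt pin seen for a device behind a P2P bridge,
 * per the routing defined in the PCI-to-PCI Bridge Architecture Spec:
 *   primary pin = (device number + secondary-side pin) mod 4
 * Pins are encoded 0..3 for INTA..INTD; all names are illustrative. */
#include <stdio.h>

static int primary_int_pin(int dev_num, int sec_pin)
{
    return (dev_num + sec_pin) % 4;
}

int main(void)
{
    const int devs[] = { 0, 4, 8, 12 };
    for (int i = 0; i < 4; i++)   /* every one of these lands on INTA */
        printf("dev %2d INTA -> primary INT%c\n",
               devs[i], 'A' + primary_int_pin(devs[i], 0));
    return 0;
}

Run it and you get primary INTA four times over, which is exactly the
configuration being asked about.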

..snip 'duh' info..
> If you do use a single interrupt behind the bridge, your boards will
> only work on platforms which use 1 system interrupt for all PCI
> interrupts or have a special, well-known interrupt wiring.

No, that is not correct.  I could follow the PCI spec, create a
configuration in which all devices on the secondary side use one int line
(as the spec allows), plug it into a slot on a compliant system, and expect
it to work.  Though, depending on how many devices share that same IRQ, the
number of chained ISRs could at some point become too many.
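To make the chaining cost concrete, here is a rough sketch of what each
handler on a shared, level-triggered line has to do.  The device structure,
register offset, and status bit are hypothetical stand-ins, not from any
real driver API:

#include <stdint.h>

/* Hypothetical device context and register layout; stand-ins for
 * whatever bus-access primitives a real driver would use. */
struct my_dev { volatile uint32_t *regs; };
#define MY_INT_STATUS  0x10   /* hypothetical status register offset  */
#define MY_INT_PENDING 0x01u  /* hypothetical "asserting INTx#" bit   */

/* One handler in a chain of ISRs sharing a level-triggered PCI IRQ.
 * Returns nonzero only if this device was the interrupt source. */
int my_isr(struct my_dev *dev)
{
    uint32_t status = dev->regs[MY_INT_STATUS / 4];

    if (!(status & MY_INT_PENDING))
        return 0;   /* not ours; the next chained ISR gets a look */

    /* ...service the device here... */

    /* Write-one-to-clear: deasserts INTx# so the shared line can
     * finally drop once every claiming device has been serviced. */
    dev->regs[MY_INT_STATUS / 4] = MY_INT_PENDING;
    return 1;       /* handled */
}

Every additional device on the line adds one more pass through a chain like
this before the IRQ can be dismissed.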

..snip.., insert Michael's material that merits response..
> > There are hundreds of PC platforms worldwide which only support 1 IRQ
> > line (INTA) per PCI slot.
> > In the case that a PCI card with a bridge requests more than 1 IRQ
> > signal (example: devices 0 and 1 are used instead of 0 and 4) you will
> > get resource allocation problems.

I suspect that what is really happening on these systems is that each PCI
slot does indeed support INTA, B, C, and D, and that the P'n'P resource
allocation mechanism, for one reason or another, is connecting two or more
of the PCI int lines to the same IRQ number.  Regardless, this should still
work (within reason) because PCI-compliant ISRs are supposed to be capable
of sharing IRQs with other PCI devices.  Recall, PCI INTs are level
triggered and sharable.  You should not receive a resource allocation error
if at least one IRQ is available to PCI (i.e., that same IRQ will just have
to be connected to INTA through INTD).
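You can see the routing result from software, too.  Every function's config
header carries a read-only Interrupt Pin register at offset 0x3D and a
firmware-written Interrupt Line register at offset 0x3C; when the platform
folds several INTx lines onto one IRQ, multiple devices simply report the
same line value.  A sketch, with the config-space accessor stubbed out
(the accessor is a made-up API, not a real one):

#include <stdint.h>
#include <stdio.h>

/* Standard PCI type-0 config header offsets. */
#define PCI_INTERRUPT_LINE 0x3C   /* system IRQ, written by firmware  */
#define PCI_INTERRUPT_PIN  0x3D   /* 1..4 = INTA#..INTD#, 0 = no pin  */

/* Stub accessor so the sketch builds standalone; a real version would
 * use the platform's config-space mechanism (this API is made up). */
static uint8_t pci_cfg_read8(int bus, int dev, int fn, int off)
{
    (void)bus; (void)dev; (void)fn;
    return (off == PCI_INTERRUPT_PIN) ? 1 : 9;  /* pretend INTA#, IRQ 9 */
}

static void report_irq(int bus, int dev, int fn)
{
    uint8_t pin  = pci_cfg_read8(bus, dev, fn, PCI_INTERRUPT_PIN);
    uint8_t line = pci_cfg_read8(bus, dev, fn, PCI_INTERRUPT_LINE);

    if (pin == 0) {
        printf("%d:%d.%d uses no interrupt\n", bus, dev, fn);
        return;
    }
    /* Several devices may legitimately report the same line value;
     * their ISRs are expected to share that IRQ, as argued above. */
    printf("%d:%d.%d INT%c# -> IRQ %u\n",
           bus, dev, fn, 'A' + pin - 1, (unsigned)line);
}

int main(void)
{
    report_irq(0, 4, 0);   /* an arbitrary example device */
    return 0;
}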

> If you want to distribute your card as a standard board-level product,
> keep to the spec.

Exactly, don't break the spec.
-- BrooksL

> Regards,
> 
> Peter Marek
> General Director
> MarekMicro GmbH
> Kropfersrichter Str. 6-8
> D-92237 Sulzbach-Rosenberg
> Germany
> Phone: 049 - 9661 - 908 - 210
> Fax:      049 - 9661 - 908 - 100
> ----- Original Message -----
> From: Lange, Michael <lange@dvs.de>
> To: Mailing List Recipients <pci-sig-request@znyx.com>
> Cc: <pci-sig@znyx.com>
> Sent: Thursday, October 14, 1999 8:58 AM
> Subject: RE: 1 IRQ line for all bridged dev? RE: prob w/PCI add-in
> cards w/PCI 2PCI bridge on it
> 
> 
> >
> >
> > -----Original Message-----
> > From: Lame Brooks-G14738 [mailto:Brooks_Lame@mcg.mot.com]
> > Sent: Wednesday, October 13, 1999 9:47 PM
> > To: 'Lange, Michael'
> > Cc: 'PCISIGList'
> > Subject: 1 IRQ line for all bridged dev? RE: prob w/PCI add-in cards
> > w/PCI 2PCI bridge on it
> >
> >
> >
> > ..snip..
> > > The best procedure for device selection is to use device numbers
> > > 4, 8 and 12 (= data lines 20, 24 and 28) only in the case that only
> > > 1 IRQ line (INTA) should be shared by all devices behind the bridge.
> > > /Michael
> >
> > Why would you want to do that?  Is it just a personal preference for
> > intr organization?  I suppose I could see doing that if all your devs
> > behind the bridge were fixed and owned by one intr service routine.
> > -- BrooksL
> >
> > ....
> > There are hundreds of PC platforms worldwide which only support 1 IRQ
> > line (INTA) per PCI slot.
> > In the case that a PCI card with a bridge requests more than 1 IRQ
> > signal (example: devices 0 and 1 are used instead of 0 and 4) you will
> > get resource allocation problems (the driver under NT cannot be
> > loaded, or something like that).
> >
> > Of course this is platform dependent.  On SPARC-based Solaris
> > computers you can do whatever you want with the 4 IRQ signals for
> > each slot.
> >
> > This is the experience we have gathered over the last 2 years.
> >
> > /Michael
>