Re: interrupt latency...
- To: Mailing List Recipients <pci-sig-request@znyx.com>
- Subject: Re: interrupt latency...
- From: "David O'Shea" <daveo@corollary.com>
- Date: Mon, 16 Dec 1996 16:07:44 GMT
- Cc: jasper@utopia.hclt.com (Jasper Balraj)
- Old-Return-Path: <daveo@corollary.com>
- Resent-Date: Mon, 16 Dec 1996 16:07:44 GMT
- Resent-From: pci-sig-request@znyx.com
- Resent-Message-Id: <"16_8m1.0.RR4.NAOjo"@dart>
- Resent-Sender: pci-sig-request@znyx.com
Interrupt latency is going to kill your throughput in this design.
It will also bring most OSes to their knees. In the PC arena, if you
are running on an 8259A interrupt controller with a Unix OS, your
latency will be VERY high. (Because of the masking required to
implement Unix's interrupt "levels".) On NT or Win95, it is still
very high but not as bad. On Unix you might count on 0.3 milliseconds
(pathetically slow). On something like NT, it's going to be faster,
but still slow: 0.1 milliseconds. You are also correct that with PCI
devices, you will be on a software interrupt chain, and have to wait
for the unpredictable latency of other PCI drivers in the chain before
you. This will also kill performance, since those other drivers
will also have to check their interrupt status registers on each of
your 4000 interrupts per second.
It's going to be slow! Now for the almost-good news. If your primary
market is uniprocessor PCs, and there are only 3 or 4 PCI slots, and
the user has properly gone through configuration hell with today's
Plug and Pray BIOS, then your device may not be sharing an interrupt
with another device. [No guarantee, just a matter of fact.]
Why don't you put some intelligence on your adapter and have it
determine when to do the data transfer? The day of "dumb" controllers
is past. [I have worked on many "dumb" controllers... those places
are all out of business now, though they thrived five to ten years ago.] Put a
processor on the board (i960 comes to mind) and stop interrupting the
host for every little thing. Just let him know when the transfer
is all done.
You will find that this is definitely the trend, and it is becoming less
"trend"-like and more "requirement"-like. Notice that Microsoft has changed
the requirements for Windows 97 Hardware platforms for IDE controllers.
The IDE controllers now MUST support the DMA modes of ATA-3 so that
the controller can transfer all of the data on its own without interrupting
the host for each and every block of data (512 bytes). The DMA-IDE will
just interrupt on completion of transfer. Microsoft wants this
because a) the technology is already there, ATA-3 is defined and implemented,
and b) interrupt latency kills OS performance. Here they are concerned
with an interrupt per 512 bytes, not an interrupt per 8*4=32 bytes
that you are talking about.
I do not want to be too harsh. Your design will work. It will be simple.
It will be SLOW!!!!! And it will fail in the marketplace.
[Better harshness early now than later in the marketplace. All of
this is definitely (obviously) In-My-Humble-Opinion.]
Regards,
David O'Shea
corollary.com
At 06:06 PM 12/16/96 +0530, Jasper Balraj wrote:
>Hello!
>
>I'd like to know the exact way to calculate the interrupt latency
>in PCI bus. I understand, since the interrupt pins at the PCI connectors
>are shared, when there's an interrupt, the device driver has to read
>from the device's status register and check whether that device is the
>source of interrupt. If that device had not generated the interrupt, pass
>control to the previous device driver. So while calculating the interrupt
>latency, am I right that one has to take into account all this PCI bus reads,
>compare and jump instructions (for the worst case)? Is there any typical
>interrupt latency period value for the PCI bus? Why I am asking this is
>I am planning to have a 8 DWORD FIFO in my PCI controller. So after
>filling-up the FIFO, my on-board processor would generate an interrupt in
>the PCI bus to tell the host to put my PCI controller in the initiator
>mode and do bus master xfer directly to the host memory. According to
>my data xfer rates, my add-on processor may interrupt the host, some
>4000 times in a second to transfer 8 DWORDS of data each time thru' the
>FIFO. So the total time wasted because of interrupt latency itself, in
>a second, itself will be 4000 times that of a single interrupt. Will
>this create any problem with other PCI cards like graphics adapters,
>PCI-SCSI, or so.
>
>I'll be really glad to get any info. in this regard. Thanx in advance.
>
>-Jasper
>
>jasper@hclt.com
>
>