[asterisk-users] Echo and static on PRI with errors
Dave Platt
dplatt at radagast.org
Wed Jul 1 10:58:44 CDT 2009
> Could someone tell me how to set which IRQ the ISDN card picks up?
It's a multi-stage process.
Each PCI slot has four interrupt pins: INTA through INTD. A
PCI card can choose to use any of these four (or even more than
one of them, as some multi-port serial cards do). Most PCI cards
use only one pin: usually INTA.
The motherboard routes four interrupt lines between the pins
in the slots it provides. The motherboard usually does *not*
route a line to the same pin on all slots... for example,
INTA on slot 1 might be routed to INTB on slot 2 and INTC
on slot 3, and then back to INTA on slot 4. This "mix 'em up"
routing is done to help compensate for the fact that most
PCI cards use only INTA - it keeps all the cards from pounding
on the same interrupt line. This is also why one way to move
a PCI card to a different IRQ is to move it to a different
slot.
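
If you're curious which pin a given card uses, and where the BIOS
ended up routing it, lspci will tell you. A minimal sketch (the
device address and IRQ number here are made up for illustration):

    # lspci -vv -s 02:04.0 | grep Interrupt
            Interrupt: pin A routed to IRQ 11

The "pin A" is the card's choice; the "IRQ 11" is the routing the
motherboard and BIOS settled on.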
The motherboard must then route the interrupt lines to
one or more IRQs. On "classic" PCI motherboards, with traditional
PC interrupt controllers, there are only a very limited number
of IRQs available (up through IRQ15) and many of these IRQs
have dedicated functions and cannot be shared (e.g. any IRQ
assigned to an ISA device can't be shared). As a result, these
motherboards tend to route multiple PCI interrupts to only one
or two IRQs - as in your case, where a whole boatload of things
are being routed to IRQ11. On these traditional motherboards,
all of the IRQ routing is under the control of the BIOS.
Hence, the second way to un-burden IRQ11 would be to change
your BIOS settings (as previously suggested). You would
want to disable any unused devices - in particular, any
IRQ-using ISA devices such as the parallel and serial ports -
and mark these IRQs as "available, not reserved for ISA". A
good BIOS would then change the PCI-INT-to-IRQ routing and
spread out the interrupt load.
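
You can see how crowded things are right now in /proc/interrupts.
The device names and counts below are illustrative, but on a
classic-PIC system a shared line looks something like this:

    $ cat /proc/interrupts
               CPU0
      0:   1234567    XT-PIC  timer
     11:    987654    XT-PIC  eth0, eth1, wctdm, uhci_hcd
     14:     22334    XT-PIC  ide0

Everything listed after the IRQ 11 count is sharing that one line.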
Unfortunately, it sounds as if the HP BIOS is of the "Father
Knows Best" variety, and won't let you control your settings.
Unless you can find an "expert" menu, or a separate configuration
program for the BIOS data (sometimes vendors make a DOS or
self-booting program available, rather than putting the full
BIOS configuration in the BIOS itself), you're stuck here.
There's a third possibility: APIC, the "Advanced Programmable
Interrupt Controller". This is a newer interrupt-controller
architecture, present on SMP systems and on many modern
uniprocessor systems. It provides the hardware and the OS with
much more flexibility, and with quite a few additional IRQ
numbers not supported by the traditional controller.
You could try building a custom Linux kernel for your system,
using a current stable kernel version (a 2.6 spin, at the moment).
Enable APIC support, including the "APIC on uniprocessor" and "local
APIC" support features.
Boot this kernel, do an lspci -v, and see where your various
cards and devices end up IRQ'ing. You may find that the APIC
support has allowed the kernel to map these devices onto a
wider range of IRQ numbers than previously.
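
A quick way to compare before and after (the output here is
illustrative; the things to look for are the IO-APIC label and the
larger IRQ numbers):

    $ cat /proc/interrupts
               CPU0
     16:    123456   IO-APIC-level  eth0
     17:     98765   IO-APIC-level  eth1
     18:    456789   IO-APIC-level  wctdm

If the PRI card lands on a line of its own like this, you've won.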
Unfortunately even this approach may not help on some
motherboards. If the vendor has wired all of the INTA pins
on the slots to the same line, and has also used this same
line for the interrupts from the internal (non-slotted) PCI
devices, then you'd be completely out of luck - it would be
physically impossible to distribute the interrupts from these
devices to different IRQs.
My guess is that your biggest conflict is between the PRI
card and the network interfaces, since both are likely
to be generators of lots of interrupts.
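
You can check that guess by watching the interrupt counters for a
few seconds and seeing whose numbers climb fastest:

    $ watch -n1 cat /proc/interrupts

The rows whose counts race upward are your heavy interrupt sources.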
Ugh... I just noticed something else... it looks as if
the motherboard in question is using at least one PCI-to-PCI
bridge:
81:01.0 Network controller: Tiger Jet Network Inc. Tiger3XX
Modem/ISDN interface
Note the bus number: 81 hex. I think that this means that the
card is sitting on the far side of a bridge chip... these are
often used if a system has more PCI devices or slots than a
single bus can support.
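
You can see the bus topology directly with lspci's tree view. A
sketch of what I'd expect (the bridge's address here is
hypothetical):

    $ lspci -t
    -[0000:00]-+-00.0
               +-1e.0-[0000:81]----01.0
               \-1f.0

The "[0000:81]----01.0" hanging off the bridge device is your
Tiger Jet card on the far side.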
I've had some bad experiences with bridged PCI systems in the
past - some bridge chips seem to add quite a bit of latency to
PCI bus access, or reduce bus throughput by quite a lot. Apparently
the individual read and write transactions through the bridge
suffer from a significant per-transaction overhead. I wonder whether
IRQ latency/delay might also be a problem here, or whether the
bridge architecture might be forcing interrupts from some cards
to use a single line/IRQ.
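
If you want to poke at this, you can at least inspect the bridge's
bus numbers and latency timers (again, the bridge's address below is
hypothetical):

    # lspci -vv -s 00:1e.0 | grep -E 'Bus:|Latency'
            Bus: primary=00, secondary=81, subordinate=81, sec-latency=32
            Latency: 0

That won't fix anything by itself, but it tells you how the bridge is
configured before you start blaming it.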