[asterisk-users] Max number of PCIe cards
Patrick Lists
asterisk-list at puzzled.xs4all.nl
Tue Apr 3 13:06:25 CDT 2012
Hi Olivier,
On 04/03/2012 10:31 AM, Olivier wrote:
> For training sessions, I'm evaluating the possibility of using a single
> physical server to host 5 virtual servers, each with its own DAHDI
> PCIe card, instead of using 5 physical machines, hoping that a single
> physical server would be easier to transport, quieter and cheaper to
> provision and maintain.
Nice idea. Hope you can pull it off.
> As you can guess, each machine shouldn't be heavily loaded (a couple
> of calls each).
> If that matters, each machine would get Asterisk 10.
> One virtual server would play the PSTN role and provide E1/T1
> connectivity to each of the 4 other servers, which would run our
> favorite B2BUA.
>
> I hope servers with 5 PCIe slots can be found here and there, but I'm
> worried about IRQs, timing issues and the like.
AFAIK PCIe has fewer problems with interrupt sharing than PCI, but timing
issues are always a risk when it comes to virtualization. Maybe you can
improve things by using a quad-core or octo-core CPU and pinning each VM
to its own core(s). Reducing disk I/O also helps, so create RAM disks and
put stuff on tmpfs where possible. Obviously you need enough RAM in the
box to facilitate this.
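With KVM/libvirt, for example, the pinning and the tmpfs mount could look
something like this (the guest name 'pbx1', the core number and the tmpfs
size are just placeholders for your own values):

  # on the host: pin vCPU 0 of guest 'pbx1' to physical core 2
  virsh vcpupin pbx1 0 2

  # in the guest's /etc/fstab: keep the Asterisk spool in RAM
  tmpfs  /var/spool/asterisk  tmpfs  size=256m  0  0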
> 1. Besides PCI assignment, which virtualization features are required
> to build such a machine? In other words, is reading "PCI assignment"
> in the Xen, LXC or equivalent datasheet enough?
The last time I tried this was with a Sangoma card that I tried to make
available in a CentOS VM running on top of Fedora 15. At the time I
could not make it work. Maybe things have improved since then. You can
easily test it by setting up a CentOS 6.2 or Fedora 16 host (or
whatever your favorite distro is) with a card, creating the VM, and
trying to make the card show up in the VM and get recognized by DAHDI.
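For what it's worth, with KVM/libvirt such a test would look roughly like
the sketch below. The PCI address 0000:03:00.0 is only an example; take
the real one from lspci, and make sure the IOMMU is enabled on the host
(e.g. intel_iommu=on on the kernel command line):

  # on the host: find the card's PCI address
  lspci | grep -i -e digium -e sangoma

Then add the device to the guest's libvirt XML:

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </source>
  </hostdev>

and inside the guest, check whether DAHDI sees the card:

  dahdi_hardware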
Alternatively, if that does not work, you could try the ISDN BRI route:
run, for example, a Digium B410P, Sangoma B500/B700 or Eicon Diva Server
BRI card on the host and attach a number of USB ISDN devices (HFC-USB
chipset), which are then passed through to the VMs. That's assuming USB
passthrough works better than PCIe passthrough. This requires the mISDN
code from misdn.eu, but if you really need something to work and your
initial idea won't fly, then this might be an option.
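In case it helps, USB passthrough in libvirt is a matter of adding a
hostdev entry like the one below to the guest (the vendor and product IDs
here are placeholders; take the real ones for your HFC-USB adapter from
lsusb on the host):

  <hostdev mode='subsystem' type='usb'>
    <source>
      <vendor id='0x1234'/>
      <product id='0x5678'/>
    </source>
  </hostdev>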
> 2. How many PCIe cards can be "safely" inserted into a server,
> without any virtualization? Does this figure change when virtualizing?
AFAIK the limit is determined by how many PCIe slots you have, assuming
the load is as light as you described. Blasting four 10G Ethernet cards
full of data would probably not work very well in this scenario with a
small PC.
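Either way, once the cards are in, it's worth checking how the interrupts
ended up being distributed on the host:

  # one line per IRQ; ideally the DAHDI cards do not share a line
  # with other busy devices
  cat /proc/interrupts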
Good luck! If you figure out a way to make this work, please share your
experience on the list.
Regards,
Patrick