[asterisk-users] Breaking news, but what happened? 11.000 channels on one server

John A. Sullivan III jsullivan at opensourcedevel.com
Tue Aug 25 15:51:23 CDT 2009


On Wed, 2009-08-26 at 06:18 +1000, Alex Samad wrote:
> On Tue, Aug 25, 2009 at 07:30:08PM +0200, Olle E. Johansson wrote:
> > 
> > On 25 Aug 2009, at 18:50, John A. Sullivan III wrote:
> > 
> > > On Tue, 2009-08-25 at 18:28 +0200, Olle E. Johansson wrote:
> > >> On 25 Aug 2009, at 16:20, Olivier wrote:
> 
> [snip]
> 
> > > mode in Linux on any old switch and it works reasonably well
> > > other than for some excessive ARP traffic.  However, as we found
> > > out the hard way when building our Nexenta SAN, bonding works
> > > very well with many-to-many traffic but does very little to boost
> > > one-to-one network flows.  They will all collapse to the same
> > > pair of NICs in most scenarios and, in the one mode where they do
> > > not, packet sequencing issues will reduce the bandwidth to much
> > > less than the sum of the connections.  Take care -
> > > John
> > 
> > That is very good feedback - thanks, John!
> > 
> > Which means that my plan B needs to be put into action. Well, I
> > did create a new branch for it yesterday... ;-)
> 
> any thoughts of different media like 10G ethernet or infiniband ?
> 
> > <snip>
Yes, this is drifting a little off-topic, but good network design does
provide the foundation for good Asterisk design.  If we have lots of
servers talking to lots of servers, bonding over Gig links works very
well.  But as we build fewer, very big servers via virtualization or,
as in this case, try to make a single large server do the work
previously handled by several, network bandwidth becomes a huge issue.
Because almost all bonding algorithms choose a single path for a flow
of data (usually based upon MAC address but sometimes on IP address or
even socket), bonding becomes less useful in these scenarios.  In
fact, it is worse than that: even where the OS stack (e.g., Linux)
supports bonding based upon data above the MAC layer, the switches
frequently do not and will again collapse several paths into one as
soon as the traffic crosses the switch.
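
To see why a single flow collapses onto one link, here is a toy
Python sketch of the idea behind the Linux bonding driver's transmit
hash policies.  The formulas are simplified from the kernel's bonding
documentation, and the addresses and ports are invented for
illustration; the real logic lives in the bonding driver itself:

NUM_SLAVES = 4  # number of links in the bond

def layer2_hash(src_mac, dst_mac):
    # layer2 policy: XOR the MAC addresses, modulo slave count.
    return (src_mac ^ dst_mac) % NUM_SLAVES

def layer3_4_hash(src_ip, dst_ip, src_port, dst_port):
    # layer3+4 policy: also mixes in the IPs and TCP/UDP ports.
    return ((src_port ^ dst_port)
            ^ ((src_ip ^ dst_ip) & 0xffff)) % NUM_SLAVES

# One server pair, one socket: every packet of the flow produces the
# same hash, so the whole flow rides a single slave NIC no matter how
# many links are in the bond.
print(layer2_hash(0x001122334455, 0x0066778899aa))
print(layer3_4_hash(0x0a000001, 0x0a000002, 49152, 5060))

Under the layer2 policy every host pair maps to one link; layer3+4
spreads different sockets around but still pins any single TCP or
iSCSI session to one 1 Gbps slave.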

Thus, for few-to-few traffic patterns, bigger pipes such as 10G are
better than bonded pipes.  Specifically, ten bonded 1 Gbps links will
effectively yield only 1 Gbps of throughput for a single flow, whereas
one 10 Gbps link yields the full 10 Gbps.  As an aside, in our iSCSI
work, we found latency to be a huge issue when the block size was
small (e.g., the 4K block size typical of Linux file systems).  Thus,
the lower latency of faster protocols is a huge performance booster.
This will not be so much of an issue with Asterisk, where the
difference between 100 usecs and 10 usecs is negligible.
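
Some back-of-the-envelope arithmetic (illustrative numbers, not
measurements) shows why latency dominates at small block sizes: the
per-request time for a synchronous 4K read is roughly latency plus
transfer time, and at 4K the transfer term is tiny:

BLOCK_BITS = 4096 * 8  # one 4K block, in bits

def request_usecs(latency_us, gbps):
    # 1 Gbps carries 1000 bits per microsecond.
    return latency_us + BLOCK_BITS / (gbps * 1000.0)

for latency_us, gbps in [(100.0, 1.0), (100.0, 10.0), (10.0, 10.0)]:
    print("%5.0f us @ %4.1f Gbps -> %6.1f us per 4K request"
          % (latency_us, gbps, request_usecs(latency_us, gbps)))

Going from 1 Gbps to 10 Gbps at 100 usec latency only trims the
request from about 133 usecs to about 103 usecs; cutting the latency
to 10 usecs shrinks it by a further factor of almost eight.  For
Asterisk's small, steady RTP packets that difference is lost in the
noise.
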
-- 
John A. Sullivan III
Open Source Development Corporation
+1 207-985-7880
jsullivan at opensourcedevel.com

http://www.spiritualoutreach.com
Making Christianity intelligible to secular society



