[Asterisk-Users] Bonded ethernet ports and *

Rolf Brusletto rbrusletto at ibsncentral.com
Wed Dec 14 06:58:33 MST 2005


Rich - Even though I mentioned ethernet failover, I may have stated it a bit
too broadly. The Linux ethernet bonding module has been around for years, and
it supports several modes, including the following (a sample configuration
follows the list):

mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first
available slave through the last. This mode provides load balancing and
fault tolerance.

mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different
slave becomes active if, and only if, the active slave fails. The bond's MAC
address is externally visible on only one port (network adapter) to avoid
confusing the switch. This mode provides fault tolerance. The primary option
affects the behavior of this mode.

mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination
MAC address) modulo slave count]. This selects the same slave for each
destination MAC address. This mode provides load balancing and fault
tolerance.

mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode
provides fault tolerance.

mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share
the same speed and duplex settings. Utilizes all slaves in the active
aggregator according to the 802.3ad specification.
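
For anyone who hasn't set this up before, the mode is selected at module load
time. A minimal sketch, assuming a 2.6-era /etc/modprobe.conf; the miimon
value here is just an illustration:

    # /etc/modprobe.conf -- select a mode by name (or number, e.g. mode=1)
    alias bond0 bonding
    options bonding mode=active-backup miimon=100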

What I was talking about was simply mode 1, active-backup. Some of our past
equipment's network interfaces had link up/down issues that could only be
traced back to the ethernet port itself, so bonding two ports for
active/backup failover works very smoothly for us. Our policy is a 500ms mii
monitor interval for link status, then a 500ms wait before actually failing
over, for a total of about 1s of possible down time. This also benefits us
because we use redundant switches in our distribution layer; if one of the
switches goes down, the bond automatically switches over to the other. My
question was really more about the bonding module than anything else, and how
much overhead it adds. Most of the other modes (except 0) typically require
trunk ports or special switch setup, and since my issues are not bandwidth
related, I've stayed away from them. I'd agree that nics are the least
concerning, but if you have an extra eth port and aren't using it for
something already, why not make it a failover port?
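
For the curious, here's roughly what our active-backup setup looks like (the
ifenslave tool and the miimon/downdelay options are standard bonding driver
bits; the interface names and address are just examples):

    # poll link every 500ms, wait a further 500ms before failing over
    modprobe bonding mode=active-backup miimon=500 downdelay=500
    ifconfig bond0 192.0.2.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1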

Best regards, 

Rolf 



On 12/13/05 4:14 PM, "Rich Adamson" <radamson at routers.com> wrote:

> 
>> Hey all - I'm sure this has been done before, but I'm curious about how well
>> it works. Typically we have all our servers set up for dual fast/gig
>> ethernet failover, i.e. bond0 slaves eth0 and eth1 and fails over between
>> the two. This, together with dual p/s and raid1'd (at least) drives, provides
>> for a pretty safe solution (aside from building up a second server). So I'm
>> curious about thoughts/expectations/issues with doing network failover.
>> It's probably a moot point, but I thought I'd ask.
> 
> I've done professional network assessments for a large number of companies
> throughout the US, and I've never once seen bonded nics work as the
> implementor expected them to work.
> 
> If you think seriously about how well the underlying OS and drivers function,
> and the length of the code path that must be executed to move packets from
> the application layer all the way down to the nic card, you'll find that
> most OS's are pressed very hard to keep a 1 gig interface running at max
> smoke. Combine that with the overhead of tcp (not udp), latency, and the
> typical tcp windowing, and it's even worse.
> 
> I'd also be checking exactly how the bonding function works in the
> primary/backup arrangement, as several implementations that I've seen do
> not handle shared mac addresses very well. That translates into arp table
> timeout issues that essentially negate the expected benefits (eg, session
> failures).
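
(Agreed that it's worth verifying; the bonding driver does expose its state
through procfs, so you can watch a failover as it happens. The bond0 name is
just whatever your bond device is called:

    # shows bonding mode, currently active slave, and per-slave link status
    cat /proc/net/bonding/bond0

Pulling a cable while watching that output is a quick way to confirm that
the failover and mac handling actually behave as expected.)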
> 
> Could there be some good implementations? Probably, but I just haven't seen
> any personally as yet.
> 
> From a VoIP perspective, a 100 meg nic interface can (in theory) handle
> 1,176 simultaneous g711 (or about 3,000 g729) conversations. That is
> significantly greater than what can be handled from a processing perspective
> (assuming all conversations pass through asterisk code). If all
> conversations essentially involve canreinvite=yes, a 100 meg nic is still
> not the bottleneck.
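
(For what it's worth, the arithmetic behind those figures, assuming 20ms
packetization and roughly 54 bytes of rtp/udp/ip/ethernet overhead per
packet -- the exact overhead depends on what you count:

    g711: (160 + 54) bytes * 8 bits * 50 pkts/sec = ~85.6 kbps per call
          100,000 kbps / 85.6 kbps = ~1,168 calls per direction
    g729: ( 20 + 54) bytes * 8 bits * 50 pkts/sec = ~29.6 kbps per call
          100,000 kbps / 29.6 kbps = ~3,378 calls per direction

which lands in the same ballpark as the 1,176 and ~3,000 above.)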
> 
> Last, the bonding of two nics at the server level _requires_ the associated
> switch interface to support the exact same bonding algorithm. Historically,
> that has been a problem for many switch vendors.
> 
> Short answer... I'd never do it. Long answer... think in terms of high
> availability "systems"; the nic card is the least concerning.



