[Asterisk-Users] Hardware for Asterisk

Ulexus ulexus at mail.lifelabs.net
Sat Jan 17 00:07:41 MST 2004


On Friday, 16 January, 2004 12:27, Steven Critchfield wrote:
> On Fri, 2004-01-16 at 06:47, Andrew Kohlsmith wrote:
> > > If you value your data, don't use software raid. If you value
> > > performance don't use software raid. If you value uptime/stability
> > > don't use any raid on IDE.
> >
> > That's pure bullshit -- I use software RAID *specifically* because I
> > value my data.  I don't want to buy two hardware RAID controllers to
> > have one sit on the shelf just in case the first dies... and if the
> > second dies you're SOL because they've lasted long enough that they're no
> > longer available.  Linux software RAID is available on any Linux system
> > and if the system blows up I can put the drives in another system and
> > *not* worry about it not being detected.
> >
> > As far as performance goes, I have some bonnie++ tests that I've run that
> > show that at least on the few systems I've tested, software RAID 1 beat
> > out hardware RAID 1 (these systems were IDE, SCSI-2 and Ultra320, with
> > DPT RAID controllers for SCSI on P4 and I think regular Promise IDE RAID
> > controllers on P3) -- not a huge difference in speed but one that at
> > least tosses your "if you value performance don't use software raid"
> > argument.
> >
> > Perhaps on a _heavily_ loaded server you might be right, but then again I
> > feel that you're stupid for letting a server get so loaded up that it
> > can't handle the simple mirroring algorithms in addition to normal file
> > serving functions without degrading performance to a noticeable degree.
> >
> > I used to believe that HW RAID was the only way to go.  With RAID5 I
> > still feel that is true to an extent.  However if you're just mirroring
> > there is _no_ significant advantage to choosing hardware RAID over
> > software RAID. Not on IDE, and not on SCSI.  In fact, there are
> > advantages to choosing software RAID over hardware RAID, as I've
> > mentioned above.
>
> Have you experienced a hardware failure yet that you had to come back
> from? If you lose a drive, there is a high probability that you will lose
> the controller. So unless you have an add-on card, or some motherboard
> with 4 IDE ports, you will corrupt the second drive of a mirror. If the
> second drive is corrupted, then you are only a hair above not having
> anything. If you don't trust that, check out the GOOD IDE raid
> controllers. You are only allowed to place 1 drive per port, and they
> only use 1 port on an IDE controller.

Now here we are seeing that you must have had a really abnormal, bad 
experience, or you are not talking from experience at all.  I have, in fact, 
used many software and hardware RAID configurations, and I have had a great 
many drive failures.  For mirroring, I use software RAID precisely because it 
avoids the dependence on a specific controller that any given hardware RAID 
array carries.

Although I think it is far-fetched to assume such a strong correlation 
between drive failure and controller failure (since I have had _far_ more 
drives fail than controllers), the fact that hardware controllers are both 
expensive (compared to free software) and rare (compared to any machine's 
normal IDE ports) is what leads me to use software RAID.  I can stick the 
good drive of any software-mirrored RAID array into _any_ other system (Linux 
OR Windows), boot up off my trusty rescue CD with software RAID and 
networking, and immediately recover data or functionality.  Further, this 
presumes that the machine which housed the failed drive is otherwise in a 
non-functional state.  If that presumption is false, the system boots just 
fine with only one working drive, because I have RAIDed my boot partition.
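The only extra step for a bootable mirror is putting the boot loader on both 
disks so either one can boot alone.  A rough sketch, assuming GRUB and 
made-up device names:

    # Install GRUB to the MBR of each disk in the mirror, so the
    # surviving disk can boot by itself (hypothetical devices)
    grub-install /dev/hda
    grub-install /dev/hdc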

Even better, when I get the new drive, I can simply install it and rebuild the 
array while the system stays online... a feature not all hardware RAID 
controllers have.
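
For what it's worth, the recovery and rebuild are only a couple of commands.  
This is just a sketch, assuming the Linux md driver with the mdadm tool and 
hypothetical device names -- adjust for your own setup:

    # Start the degraded mirror from the surviving disk
    mdadm --assemble --run /dev/md0 /dev/hda1

    # Watch array status and, later, the rebuild progress
    cat /proc/mdstat

    # Once the replacement disk is installed, add it to the array;
    # the kernel rebuilds the mirror in the background while the
    # system stays online
    mdadm /dev/md0 --add /dev/hdc1

raidtools users can do the equivalent with raidhotadd.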

_My_ horror stories are those of single "brick outhouse" servers, full of all 
sorts of special hardware, failing out in the field with an SCA drive and no 
SCA backplane/controller within 100 miles.

>
> Even the large NAS devices that use IDE have the IDE controller built
> into the sled that holds the drive and use PCI hotswap technology.
>
> I don't buy it that any truly redundant raid system is as fast in
> software as in hardware on a machine doing anything significant. In raid
> 1, you are double or more writing all data to the drives. In a read
> environment, it might be able to share the load out to more than 1 drive
> and help, but I don't expect it would be much better than a dedicated
> controller handling the load. Any load of a software raid solution takes
> processor time away from the processes it is trying to complete. So take
> our VoIP application, if I am spending time getting the voice recording
> to 2 or more drives and the software to get it there, you have
> significantly reduced the amount of time available to the CPU to handle
> the VoIP packets in a timely manner. This only gets worse as call volume
> goes up. If it is hardware raid, you know it will be a single write and
> the controller deals with the problems.

I agree that, from what I have seen, software RAID-1 is significantly slower 
and more CPU-intensive than any (real) hardware RAID array, but my earlier 
comments illustrate, in my view, why a simple two-disk software RAID-1 array 
is preferable in most mirroring situations.

That said, I agree that in VoIP, the fight is always against latency, so 
anything you can do to offload long-wait processing is a *good thing*.

>
> > > What matters as far as the computers being used is that you are
> > > unlikely to get your hands on a real server class motherboard without
> > > having bought it in a Dell or Compaq. It also matters as to the
> > > supporting
> >
> > Again I call bullshit -- Where do you think Dell and Compaq get their
> > motherboards from?  (ok compaq might actually manufacture them) -- I can
> > get server-class motherboards from Asus, Gigabyte, Intel, Tyan, and a
> > host of manufacturers without having to buy into the proprietary nature
> > of anything Name Brand.
>
> On server hardware, Dell has their own boards. IBM had their own boards.
> Compaq and HP also produce their own boards. Maybe they don't produce
> their own boards in the desktop models, but they do in the server class
> machines. While you can buy Intel, Tyan, and SuperMicro boards, I
> wouldn't consider any of the remaining ones you list as truly server
> class.

They have the OEMs produce the boards _for_ them, you mean.  Again, however, I 
bring up the comparison of "brick outhouse" -- single-instance 
over-engineering to the detriment of redundancy -- versus "commodity bazaar" 
-- redundancy and availability above all.


>
> > > hardware. If the PSU isn't quality enough, then it doesn't matter what
> > > motherboard you use. Dell doesn't want to deal with your system after
> > > sales. They will put a few extra dimes into the PSU so it stays in
> > > shape for a few more years. The companies you are most likely to
> > > purchase a case from will usually expect you to not come after them if
> > > the PSU fails. So why would they bother to spend the extra money to
> > > make the PSU last longer?
> >
> > I can also put some extra dimes into the power supply... or fans... or
> > anything.  Dell/Compaq/whoever does not mean high quality by default.
>
> Maybe not by default, but if you get into the hot swap PSUs you
> absolutely are talking quality.
>

What on Earth is leading you to these disjointed associations today?  You 
have spent months dashing the logical fallacies and personal 
inconsistencies of others, and now you make comments such as these?  Are you 
the same Steven Critchfield I have been seeing post here?

Is it not conceivable (worse, I know it happens) that a quality, 
well-engineered idea or product can be copied and/or value-engineered by 
less thorough or scrupulous manufacturers?  This has happened time and time 
again, and it will keep happening as long as there is a free market.

One simply cannot rely solely on brand, concept, or rhetoric for one's 
information.  

> > > Also Dell is more likely to have a part to fix your machine in the mail
> > > within hours instead of you waiting till you can get to the store to
> > > purchase your replacement part before RMAing the part and waiting the
> > > couple of weeks for the replacement.
> >
> > This is true.
> >
> > > In general, you get what you pay for, and less so when you go bargain
> > > hunting. It all comes down to the same old problem of figuring out what
> > > your time and downtime are worth.
> >
> > Agreed.  Personally I'd rather have a complete second system on the shelf
> > that I can swap out within 15 minutes than rely on anyone plus a courier,
> > but that's just me.
>
> While I'll agree that a complete spare is a good idea, if you are
> looking for the bargains now, I don't have faith that you would also be
> the person who would buy 2 and leave the second untouched until failure
> occurs. I'll admit I couldn't leave a fully functioning machine just
> lying around not doing something.

This is definitely something we can agree upon.

--
Sean C. McCord



