<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
Steven Critchfield wrote:
<blockquote cite="mid:1100988681.28985.113.camel@critch" type="cite">
<pre wrap="">On Sat, 2004-11-20 at 16:28 -0500, John Todd wrote:
</pre>
<blockquote type="cite">
<pre wrap="">I hadn't thought about SGI. Do they have any special hardware tricks
up their sleeves for perhaps doing codec transcoding in a more
efficient manner than in the "generic" main CPU?
</pre>
</blockquote>
<pre wrap=""><!---->
I don't know about special tricks other than the ability to add 4 way
Itanium bricks to the "cluster" and have it just work. They have a
special interconnect to have all the CPUs communicate via NUMA and
therefore to the OS it is as if they all are in the same motherboard. So
the benefit I see is in that you could get your DS3 and a couple of C
bricks(cpu components) and start off with a nice frac DS3 setup. As you
grow and need more channels and CPUs, you add another C brick and up
your capacity. Unfortunately I don't have any experience with it so I'll
leave it at that.
</pre>
</blockquote>
<br>
I think a great "trick" for doing DSP operations like codecs on cheap
hardware is to use the power of the GPU. This is something Mark has
talked about in the past.<br>
<br>
Much of the work in audio DSP seems similar to work that your
off-the-shelf GPU can do much faster than general-purpose CPUs can,
and the price/performance is great. You'd need to rewrite the codecs
and other DSP routines for this, but programming to a meta-library like Sh
(<a class="moz-txt-link-freetext" href="http://libsh.sourceforge.net/">http://libsh.sourceforge.net/</a>) keeps your code portable
across several GPU backends.<br>
<br>
<pre>"The new GPUs are very fast. Stanford's Ian Buck calculates that the
current GeForce FX 5900's performance peaks at 20 Gigaflops, the
equivalent of a 10-GHz Pentium -- with, according to Nvidia, even more
speed on the horizon. Performance growth has multiplied at a rate of 2.8
times per year since 1993, a pace analysts expect the industry to
maintain for another five years. At this rate, GPU performance will move
inexorably into the teraflop range by 2005."</pre>
<br>
While moving to bigger, more flexible iron is always a strategy that
can be made to work, it's usually not the most cost-effective one.<br>
<br>
If instead we designed * so that you could build a cluster of
cheaper boxes operating as a cohesive unit, where if one box fails
the others seamlessly take over, you'd still get much better
price/performance than with the scalable/fault-tolerant
(read: expensive and complicated) hardware this thread has been
discussing.<br>
<br>
A first step towards a true clustered solution might be designed so
that if a box fails, you lose the calls currently on that box, but
the system otherwise keeps running. As long as the cluster works
together and scales to tens or hundreds of machines operating as a
seamless unit, that might be an acceptable solution, and one able to
give you a decent number of 9's.<br>
<br>
</body>
</html>