<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
On 2012-01-18 16:45, John Knight wrote:
<blockquote cite="mid:4F16E937.9040309@classiccitytelco.com"
type="cite">
Ah, apologies, I just re-read the Asterisk version you mentioned.
Indeed, I was using 1.8.5.0 at the time, not any 1.4.x release.<br>
<br>
Any Digium timing card will work as an OpenVZ-compatible DAHDI
timing device; I've seen this work on both Virtuozzo and OpenVZ.
There's no special setup: you grant passthrough access to the
device in /dev using DEVNODES entries in the $CTID.conf file. Just
make sure the permissions inside the container make it writable by
the asterisk user.<br>
</blockquote>
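For reference, the DEVNODES passthrough described above might look something like this; a sketch only, where the container ID 101 is an illustrative assumption and paths should be adjusted to your setup:
<pre wrap="">
# Hypothetical container ID; substitute your own.
CTID=101

# Grant the container read/write passthrough to /dev/dahdi
# (vzctl persists this as DEVNODES="dahdi:rw " in $CTID.conf):
vzctl set $CTID --devnodes dahdi:rw --save

# Inside the container, make the devices writable by the asterisk user:
vzctl exec $CTID chown -R asterisk:asterisk /dev/dahdi
</pre>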
Okay, I will try that procedure tonight. <br>
I'll also remove my Intel dual-NIC card and the network bonds.<br>
<br>
After that, the only differences between the working machine and the
non-working machine are the architecture, i386 vs. amd64,<br>
and the OS version, Debian 5 vs. 6.<br>
<br>
Have you used 64-bit kernels (amd64) in your setup? Which distribution?<br>
<br>
Thanks for your advice, it's much appreciated!<br>
<br>
/Johan<br>
<br>
<blockquote cite="mid:4F16E937.9040309@classiccitytelco.com"
type="cite"> On 1/18/2012 8:52 AM, Johan Wilfer wrote:
<blockquote cite="mid:4F16CEAF.6000209@jttech.se" type="cite">
<pre wrap="">On 2012-01-18 11:31, John Knight wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hi Johan,
I've run into a similar issue before. I didn't resolve the problem
per se, but I worked around it by modifying modules.conf to disable
loading of res_timing_timerfd.so and load res_timing_dahdi.so instead:
noload => res_timing_timerfd.so
load => res_timing_dahdi.so
CPU load came back down and call quality has been excellent since.
Perhaps this might work for you?
</pre>
</blockquote>
<pre wrap="">Hi!
I think timing support was included in Asterisk in 1.6.1/1.6.2.
As I run 1.4, these modules are not available at all.
Do you run Asterisk >1.6 on amd64?
Another option would be to port my dialplan to a newer version of
Asterisk, if that would resolve the issue.
A workaround I've been thinking about is to put a spare
Digium card in the server just for timing, in case there is something
strange with the soft DAHDI timing.
I'm not very fond of the idea of rebuilding everything on the i386
architecture, but that's the last resort.
/Johan
</pre>
<blockquote type="cite">
<pre wrap="">On 1/18/2012 4:24 AM, Johan Wilfer wrote:
</pre>
<blockquote type="cite">
<pre wrap="">I'm in the process of replacing an old server with a new one and am
making some changes in the infrastructure; the biggest change in my eyes
is moving from the i386 to the amd64 arch. Yesterday I began migrating
some users from the old server to the new one.
After only 57 concurrent calls in about 13 conferences, the sound
starts losing quality.
The server uses DAHDI 2.6.0 for timing but has no DAHDI hardware.
dahdi_test gives results like this when the server is under that load:
100.000% 99.999% 99.994% 99.998% 99.999% 99.616% 99.614% 99.997%
99.998% 99.618% 99.615% 99.994% 99.987% 99.626% 99.628% 99.993%
99.626% 100.000% 100.000% 99.622% 99.999% 99.607% 99.604% 99.627%
99.621% 99.629% 99.627% 99.998% 99.622% 99.995% 99.621% 99.996%
Results from dahdi_test with only some calls active:
99.999% 99.999% 99.990% 99.998% 99.999% 99.995% 99.995% 99.993%
99.997% 99.993% 99.999% 99.998% 99.996% 99.996% 99.998% 99.998%
99.991% 99.998% 99.995% 99.995% 99.987% 99.985% 99.996% 99.995%
Looking at the Cacti graphs, the kernel uses 100% CPU (out of 400% total
with 4 processor cores) when the problem above is present. Top does not
show the kernel CPU usage that Cacti shows, but maybe that is by design?
Asterisk is using about 15% CPU.
top - 19:32:06 up 20:57, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 213 total, 1 running, 212 sleeping, 0 stopped, 0 zombie
Cpu(s): 7.4%us, 29.6%sy, 0.0%ni, 55.3%id, 0.0%wa, 0.0%hi, 7.7%si,
0.0%st
Mem: 12299332k total, 3967800k used, 8331532k free, 251432k buffers
Swap: 19529720k total, 0k used, 19529720k free, 2919456k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
30666 root 0 -20 539m 25m 6600 S 15 0.2 6:55.01 asterisk
738 root 20 0 19184 1444 1004 R 1 0.0 0:00.08 top
The old server (i386, Debian 5: Linux 2.6.26-2-openvz-686) can handle 320
calls in conferences without this problem.
The new server (amd64, Debian 6: Linux 2.6.32-5-openvz-amd64) shows these
problems after about 57 calls.
Old server:
HP DL360 G5, 4-CPU Xeon E5420, 2.50 GHz
runs i386 with PAE and OpenVZ, Debian Lenny
uses the Broadcom NICs on the motherboard
Asterisk 1.4.42 in an OpenVZ container (uses /dev/dahdi for timing)
Cacti shows CPU in kernel mode at 80% with 320 active calls in conferences
New server:
HP DL360 G7, 4-CPU Xeon E5520, 2.27 GHz
runs amd64 with OpenVZ, Debian Squeeze
uses Intel 82571EB NICs to offload the processor, plus NIC bonding in
the kernel for failover
Asterisk 1.4.42 in an OpenVZ container (uses /dev/dahdi for timing)
Cacti shows CPU in kernel mode at 100% with 57 active calls in conferences
This is a puzzle to me...
- Does anyone have experience with the amd64 arch and DAHDI for timing?
- Can DAHDI on amd64 be responsible for the high CPU in kernel mode?
- I have a spare Digium TE220; would it offload the server to use it as
a timing source only?
- How do I debug the high CPU usage by the kernel? Can I break this
down by module in some way?
Many, many thanks!
</pre>
</blockquote>
</blockquote>
</blockquote>
<br>
</blockquote>
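On the question quoted above about breaking kernel CPU usage down by module: a sampling profiler can attribute kernel time to individual symbols and modules. A minimal sketch, assuming the perf tool matching the running kernel is installed on the host (the 30-second window is arbitrary):
<pre wrap="">
# Sample all CPUs, kernel and user stacks, for 30 seconds:
perf record -a -g -- sleep 30

# Summarize where the time went, grouped by process, object
# (kernel module or binary), and symbol:
perf report --sort comm,dso,symbol
</pre>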
<br>
</body>
</html>