[asterisk-users] Call quality

Loic Didelot ldidelot at mixvoip.com
Tue Jul 1 19:03:33 CDT 2008


Hi, it's me again.

Here is the output of zttest on a completely idle system (no calls).
According to some documents, those values do not look good (a small filter
for counting the weaker passes is sketched after the output below).

The IRQ of my Zaptel card is shared with other devices, but I am not sure
whether this causes a problem (a quick /proc/interrupts check is sketched
after the lspci output below).


lspci  -v | grep "IRQ 22" -B4 
00:0c.0 ISDN controller: Cologne Chip Designs GmbH ISDN network
Controller [HFC-8S] (rev 01)
	Subsystem: Cologne Chip Designs GmbH Unknown device b55b
	Flags: medium devsel, IRQ 22


00:0f.0 IDE interface: VIA Technologies, Inc. VIA VT6420 SATA RAID
Controller (rev 80) (prog-if 8f [Master SecP SecO PriP PriO])
	Subsystem: VIA Technologies, Inc. VIA VT6420 SATA RAID Controller
	Flags: bus master, medium devsel, latency 32, IRQ 22


00:0f.1 IDE interface: VIA Technologies, Inc.
VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
(prog-if 8a [Master SecP PriP])
	Subsystem: VIA Technologies, Inc.
VT82C586/B/VT82C686/A/B/VT8233/A/C/VT8235 PIPC Bus Master IDE
	Flags: bus master, medium devsel, latency 32, IRQ 22
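
For what it's worth, one way to confirm which drivers are actually attached
to that interrupt line (a minimal sketch, assuming a standard Linux /proc
layout) is:

cat /proc/interrupts            # one line per IRQ, listing every driver registered on it
grep ' 22:' /proc/interrupts    # only the IRQ 22 line; ideally the HFC-8S should not appear next to the SATA/IDE drivers

If the card does share its line with the disk controllers, moving it to a
different PCI slot (or disabling unused onboard devices in the BIOS) often
frees up an interrupt of its own.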



 

Opened pseudo zap interface, measuring accuracy...
99.989845% 99.979881% 99.987305% 99.987297% 99.988190% 99.986824%
99.987999% 
99.987701% 99.984970% 99.987892% 99.987587% 99.987595% 99.987885%
99.988968% 99.987885% 
99.989449% 99.987595% 99.989250% 99.988571% 99.987106% 99.990044%
99.990921% 99.986519% 
99.990822% 99.978127% 99.985054% 99.984482% 99.963478% 99.978722%
99.950005% 99.974609% 
99.955170% 99.969528% 99.967972% 99.964066% 99.979797% 99.962898%
99.976852% 99.980072% 
99.972946% 99.989937% 99.972359% 99.986908% 99.987694% 99.988770%
99.993660% 99.991516% 
99.992577% 99.993164% 99.992470% 99.984276% 99.991600% 99.983200%
99.992279% 99.979790% 
99.990036% 99.981544% 99.988770% 99.981346% 99.988182% 99.988190%
99.986717% 99.991211% 
99.986618% 99.986824% 99.987991% 99.988869% 99.989265% 99.987015%
99.987396% 99.987495% 
99.985657% 99.987396% 99.986229% 99.987206% 99.986908% 99.986618%
99.987411% 99.988579% 
99.989059% 99.987106% 99.986336% 99.987114% 99.988190% 99.983200%
99.958191% 99.986031% 
99.989357% 99.985939% 99.988678% 99.989746% 99.990341% 99.988762%
99.989159% 99.976067% 
99.991798% 99.962799% 99.976173% 99.972366% 99.962898% 99.972855%
99.951462% 99.983986% 
99.952049% 99.985733% 99.963776% 99.977440% 99.980186% 99.973915%
99.977333% 99.990341% 
99.969032% 99.995110% 99.988770% 99.989555% 99.991211% 99.992386%
99.990929% 99.992294% 
99.991119% 99.991997% 99.992088% 99.980865% 99.988670% 99.982712%
99.989059% 99.981934% 
99.982903% 99.981850% 99.989845% 99.981628% 99.989258% 99.872566%
99.988678% 
--- Results after 134 passes ---
Best: 99.995 -- Worst: 99.873 -- Average: 99.982824, Difference:
99.983075
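
To quantify the run above, the passes can be filtered with a small shell
pipeline (a minimal sketch: it assumes the output was saved to a file,
hypothetically named zttest.log, and uses 99.98% purely as an illustrative
cutoff):

# split the output into one value per line, keep only the percentage tokens,
# then count how many passes fall below the cutoff
tr ' ' '\n' < zttest.log | grep '%' \
  | awk -F'%' '{ if ($1 + 0 < 99.98) n++ } END { printf "%d passes below 99.98%%\n", n + 0 }'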


Loic



On Wed, 2008-07-02 at 01:36 +0200, Loic Didelot wrote:
> Hi,
> I am using g711a everywhere.
> 
> I checked on a completely idle system (no calls at all), and the idle CPU
> percentage drops from 100% to 0% more than once per minute.
> 
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  0      0 891124   4644  42868    0    0     0    28 4047 85757  0 97  3  0
>  0  0      0 891124   4644  42876    0    0     0     0 4042 68342  0 94  6  0
>  0  0      0 891124   4644  42876    0    0     0     0 4042 72429  0 97  3  0
>  0  0      0 891124   4644  42876    0    0     0     0 4065 158878  0 100  0  0
>  0  0      0 891124   4644  42876    0    0     0     0 4033 59033  0 98  2  0
>  0  0      0 891124   4644  42876    0    0     0     0 4012 14464  0 96  4  0
>  0  0      0 891124   4652  42868    0    0     0    76 4013 19727  0 37 62  1
>  0  0      0 891124   4652  42876    0    0     0     0 4011 20225  0  4 96  0
>  0  0      0 891124   4652  42876    0    0     0     0 4011 23901  0 20 80  0
>  0  1      0 891124   4652  42876    0    0     0     4 4025 21165  0 40 55  5
>  0  0      0 891124   4660  42876    0    0     0    32 4028 20190  0  1 95  4
>  0  0      0 891124   4660  42876    0    0     0     0 4022 23295  0  0 100  0
>  0  0      0 891124   4660  42876    0    0     0     0 4111 20508  0  0 100  0
>  0  0      0 891124   4660  42876    0    0     0     0 4102 25239  0 30 70  0
>  0  0      0 891124   4660  42876    0    0     0     0 4112 23148  0  0 100  0
>  0  0      0 891124   4668  42868    0    0     0    52 4116 19031  0  0 100  0
>  1  0      0 891124   4668  42876    0    0     0     0 4110 21776  0  0 100  0
>  0  0      0 891124   4668  42876    0    0     0     0 4150 20332  0  0 100  0
>  0  0      0 891124   4668  42876    0    0     0     0 4114 26285  0  0 100  0
>  0  0      0 891124   4668  42876    0    0     0    32 4118 23029  1  0 99  0
>  0  0      0 891124   4668  42876    0    0     0     0 4121 23284  0  0 100  0
>  0  0      0 891124   4676  42868    0    0     0    60 4112 25232  0 36 64  0
>  0  0      0 891124   4676  42876    0    0     0     0 4134 21583  0 99  1  0
>  0  0      0 891124   4676  42876    0    0     0     0 4105 26029  0 100  0  0
>  0  0      0 891124   4676  42876    0    0     0    76 4143 22795  0 25 75  0
>  0  0      0 891124   4676  42876    0    0     0     0 4118 21418  0  0 54 46
>  0  0      0 891124   4676  42876    0    0     0     0 4108 25499  0  0 100  0
>  0  0      0 891124   4684  42868    0    0     0    52 4081 20778  0  0 100  0
>  0  0      0 891124   4684  42876    0    0     0     0 4011 25463  0 13 87  0
>  0  0      0 891124   4684  42876    0    0     0     0 4021 23502  0 86 14  0
>  0  0      0 891124   4684  42876    0    0     0     0 4015 21693  0  1 99  0
> 
> 
> On Tue, 2008-07-01 at 22:28 +0300, Tzafrir Cohen wrote:
> > On Tue, Jul 01, 2008 at 03:22:07PM -0400, Steve Totaro wrote:
> > > Run top along with the tool that indicated the high I/O and see what
> > > is going on.  Are you doing G729 or anything like that?
> > 
> > vmstat will probably provide more useful data (vmstat 1 etc. for a
> > continuous run).
> > 
> 
> 



