[Asterisk-Users] TDM-fxo card and zttest - logic problem?
Rich Adamson
radamson at routers.com
Fri Apr 22 14:53:02 MST 2005
Been playing around with the zaptel/zttest utility and believe there is a
logic problem with this 83-line app. (The objective is to better
understand missed frames, interrupts, etc., associated with the TDM
card. Maybe we can get a handle on why things like spandsp failures,
echo, etc., are occurring in some cases.)
When the app is run as "./zttest -v", it repeatedly shows:
8192 samples in 8190 sample intervals 99.975586%
8192 samples in 8190 sample intervals 99.975586%
8192 samples in 8190 sample intervals 99.975586%
8192 samples in 8190 sample intervals 99.975586%
implying the PCI transfer path is running at about 99.975%
accuracy. Following the logic in the app, that really says the
TDM card transferred 8,192 bytes of data in the equivalent
time frame in which 8,190 bytes would have been moved.
In other words, we got the expected/wanted 8,192 bytes in
99.975% of the 1-second interval (indicating a better than
expected response, not worse).
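For reference, the heart of the measurement loop does roughly the
following (paraphrased from memory rather than quoted verbatim from
zttest.c; the function name, the 1,024-byte read size, and the exact
printf expression are my approximations, though the ratio shown does
match the output above):

  #include <stdio.h>
  #include <sys/time.h>
  #include <unistd.h>

  #define SIZE 8000       /* stop once more than this many bytes arrive */

  /* fd is the open zaptel timing device, e.g. /dev/zap/pseudo */
  void one_pass(int fd)
  {
      char buf[1024];
      struct timeval start, now;
      int count = 0, ms, res;

      gettimeofday(&start, NULL);
      do {
          res = read(fd, buf, sizeof(buf)); /* blocks until data arrives */
          if (res < 0)
              return;
          count += res;
      } while (count < SIZE);
      gettimeofday(&now, NULL);

      ms  = (now.tv_sec - start.tv_sec) * 8000;   /* whole seconds -> sample intervals */
      ms += (now.tv_usec - start.tv_usec) / 125;  /* leftover usec -> sample intervals */
      printf("%d samples in %d sample intervals %f%%\n",
             count, ms, 100.0 * ms / count);
  }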
Second, there is a rounding error in the calculation. In the
statement:
ms = (now.tv_sec - start.tv_sec) * 8000;
the number of 'seconds' is calculated for the time necessary to
receive something greater than 8,000 bytes from the TDM card.
That statement always results in 1 second (times 8,000 bytes per
second to convert it into an equivalent byte count).
The statement that immediately follows it:
ms += (now.tv_usec - start.tv_usec) / 125;
calculates the number of 'microseconds' (in addition to the seconds
from above) required to receive something greater than 8,000 bytes
from the TDM card. On my system, that result is 23,863 microseconds.
When converted to the equivalent number of bytes, it is 190.9 bytes.
Adding the two values together results in 8,190.9 bytes, however the
calculation drops everything to the right of the decimal (since the
value is stuffed into an integer variable).
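To make the truncation concrete, here it is with the numbers from my
run (the 23,863 usec is just what I measured; yours will differ):

  int    ms    = 1 * 8000 + 23863 / 125;     /* integer math: 8000 + 190 = 8190 */
  double exact = 1 * 8000.0 + 23863 / 125.0; /* what it really was: 8190.904    */

Taking the printed ratio at face value, keeping the fraction would
display 100 * 8190.904 / 8192 = 99.9866% instead of 99.9756%. Small
here, but it is a systematic error.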
Logic issues perceived include:
1. We "received" the expected 8,192 bytes, period. There weren't
any missed frames that could be detected. The data is read in
1,024-byte chunks until something greater than 8,000 bytes
(SIZE 8000) is received. One missed interrupt/frame is equivalent
to 125,000 microseconds. So the measured 23,863 microseconds is
"far less" than a single interrupt (125,000 microseconds), or
about 19% of a single interrupt. That would suggest a single
missed interrupt (or frame) would yield a 91.02% result in the
display (see the arithmetic sketch after this list). At what
realistic percentage would problems arise? It wouldn't appear
that 99.975% is a serious problem, but what value is?
2. On my system, the total cumulative time was 1.023863 seconds to
receive the 8,192 bytes, when it was supposed to happen in 1.00 sec.
If the app displayed those values, we'd know what we're looking
for (23,863 usec of delay from something); a sketch of such a
change follows the list.
3. The entire zttest logic simply reads data from the TDM buffer,
repeatedly. There is no support in this app for interrupts, so if
interrupt service overhead (e.g., scsi/ide/video delays) affected
how the interrupts were handled, it wouldn't be detected by the
app logic. All we know is that the time it took to get 8,192 bytes
was something slightly greater than 1 second. Is the clock on the
TDM card on frequency, as an example? Who knows. So that would
imply the zttest app isn't just measuring bus/OS efficiency, but
includes all other imperfections, including clock errors on the
TDM board, and apparently excludes interrupt servicing. Another
app is probably needed to narrow down the source of issues.
4. If we added logic to the app to simply read a TDM chip register
as fast as it can, measure and report that, would that not provide
some insight into how fast the PCI bus and TDM card could respond
(at max speed)?
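Regarding the 91.02% figure in point 1, that's just the same two
statements with one missed 125,000-usec interrupt folded into the
elapsed time (plain arithmetic behind the figure, not a claim about
what the app would literally print):

  int    ms  = 1 * 8000 + 125000 / 125;  /* 8000 + 1000 = 9000 sample intervals */
  double pct = 100.0 * 8192 / ms;        /* 8192 / 9000 = 91.02%                */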
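And the extra display suggested in point 2 would only take a couple
of lines; something like the following (untested, and it assumes the
start/now timeval pair the app already keeps):

  /* Show the raw elapsed time alongside the percentage, so the
     absolute delay (23,863 usec in my case) is visible directly. */
  long sec  = now.tv_sec  - start.tv_sec;
  long usec = now.tv_usec - start.tv_usec;
  if (usec < 0) {          /* normalize if the usec field wrapped */
      usec += 1000000;
      sec--;
  }
  printf("elapsed: %ld.%06ld sec for %d samples\n", sec, usec, count);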
Can someone walk through the above and help me understand where my
logic might be less than accurate/reasonable?
Rich