[asterisk-bugs] [JIRA] (PRI-96) Q931 not sending RELEASE after getting DISCONNECT

Richard Mudgett (JIRA) noreply at issues.asterisk.org
Wed Jan 16 14:13:46 CST 2013


     [ https://issues.asterisk.org/jira/browse/PRI-96?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Richard Mudgett updated PRI-96:
-------------------------------

    Description: 
Several times per day I'm getting disconnection cause code 102 (recovery on timer expiry), which BT tell me is the result of Asterisk not sending a RELEASE in response to a DISCONNECT.

I'm seeing a handful of these in 'bursts' on a system making/taking around 1500 calls per day. The net result is only that we lose a channel for 30 seconds while it times out, but these instances are visible to the boss and I'm getting grief over them :-(

I've turned on PRI debug and attached an example of one of these calls, and it does seem that BT are correct: no response is sent to the DISCONNECT message, and the exchange sends a RELEASE with cause code 102 some 30 seconds after the original DISCONNECT was sent.
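
In case it helps to have the expected handshake spelled out, here's a minimal, purely illustrative sketch of the Q.931 clearing sequence as I understand it. None of the names here (toy_call, on_disconnect, send_release and so on) are real libpri functions; this is not libpri code:

{noformat}
/*
 * Purely illustrative sketch, not libpri source.  It just models the
 * Q.931 clearing handshake that the attached trace is missing half of:
 * DISCONNECT in, RELEASE out, RELEASE COMPLETE back, call reference freed.
 */
#include <stdio.h>

enum clearing_state { STATE_ACTIVE, STATE_RELEASE_REQUEST, STATE_NULL };

struct toy_call {
    int cref;                       /* call reference, e.g. 215/0xD7 */
    enum clearing_state state;
};

/* Stand-in for "build and send a RELEASE"; real code would emit the frame. */
static void send_release(struct toy_call *call, int cause)
{
    printf("> RELEASE  cref=%d cause=%d\n", call->cref, cause);
}

/* What ought to happen when a DISCONNECT arrives. */
static void on_disconnect(struct toy_call *call, int cause)
{
    printf("< DISCONNECT  cref=%d cause=%d\n", call->cref, cause);
    send_release(call, cause);      /* the step missing in the attached trace */
    call->state = STATE_RELEASE_REQUEST;   /* and start T308 here */
}

/* The peer's RELEASE COMPLETE finishes the clearing. */
static void on_release_complete(struct toy_call *call)
{
    printf("< RELEASE COMPLETE  cref=%d\n", call->cref);
    call->state = STATE_NULL;       /* call reference free for reuse */
}

int main(void)
{
    struct toy_call call = { 215, STATE_ACTIVE };
    on_disconnect(&call, 16);       /* cause 16, Normal Clearing, as in the trace */
    on_release_complete(&call);
    return 0;
}
{noformat}

As I read Q.931, the side that sent the DISCONNECT runs timer T305 (30 seconds by default, I believe) while it waits for that RELEASE, and clears the call itself with cause 102 when the timer expires, which would fit the 30-second gap in the trace.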

Curiously, the logger always seems to output the same strangeness for the Cause data in these instances, like so:

{noformat}
[May 18 14:27:45] VERBOSE[4471] logger.c: < Protocol Discriminator: Q.931 (8)  len=16
[May 18 14:27:45] VERBOSE[4471] logger.c: < Call Ref: len= 2 (reference 215/0xD7) (Originator)
[May 18 14:27:45] VERBOSE[4471] logger.c: < Message type: RELEASE (77)
[May 18 14:27:45] VERBOSE[4471] logger.c: < [08 02 80 90]
[May 18 14:27:45] VERBOSE[4471] logger.c: < Cause (len= 4) [ Ext: 1  Coding: CCITT (ITU) standard (0)  Spare: 0  Location: User (0)
[May 18 14:27:45] VERBOSE[4471] logger.c: <                  Ext: 1  Cause: Normal Clearing (16), class = Normal Event (1) ]
[May 18 14:27:45] VERBOSE[4471] logger.c: < [08 05 82 e6 33 30 35]
[May 18 14:27:45] VERBOSE[4471] logger.c: < Cause (len= 7) [ Ext: 1  Coding: CCITT (ITU) standard (0)  Spare: 0  Location: Public network serving the local user (2)
[May 18 14:27:45] VERBOSE[4471] logger.c: <                  Ext: 1  Cause: Recover on timer expiry (102), class = Protocol Error (e.g. unknown message) (6) ]
[May 18 14:27:45] VERBOSE[4471] logger.c: <              Cause data:[May 18 14:27:45] VERBOSE[4471] logger.c:  33[May 18 14:27:45] VERBOSE[4471] logger.c:  30[May 18 14:27:45] VERBOSE[4471] logger.c:  35[May 18 14:27:45] VERBOSE[4471] logger.c:  (Timer T[May 18 14:27:45] VERBOSE[4471] logger.c: 3[May 18 14:27:45] VERBOSE[4471] logger.c: 0[May 18 14:27:45] VERBOSE[4471] logger.c: 5[May 18 14:27:45] VERBOSE[4471] logger.c: )
[May 18 14:27:45] VERBOSE[4471] logger.c: -- Processing IE 8 (cs0, Cause)
[May 18 14:27:45] VERBOSE[4471] logger.c: -- Processing IE 8 (cs0, Cause)
[May 18 14:27:45] VERBOSE[4471] logger.c: q931.c:3801 q931_receive: call 215 on channel 5 enters state 0 (Null)
[May 18 14:27:45] VERBOSE[4471] logger.c:     -- Channel 0/5, span 1 got hangup, cause 102
{noformat}
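
In case it saves anyone a minute with the hex, here's a small stand-alone decode of the two raw Cause IEs from that RELEASE ([08 02 80 90] and [08 05 82 e6 33 30 35]). This is not how libpri parses them, and it assumes octet 3's extension bit is set (no octet 3a), which is true for both IEs here:

{noformat}
/* Stand-alone decode of the two Cause IEs quoted above; illustration only. */
#include <stdio.h>

static void decode_cause_ie(const unsigned char *ie)
{
    /* ie[0] = IE identifier (0x08, Cause), ie[1] = content length.
     * Assumes octet 3's extension bit is set, so ie[3] is the cause octet. */
    unsigned len      = ie[1];
    unsigned location = ie[2] & 0x0f;        /* low nibble of octet 3 */
    unsigned cause    = ie[3] & 0x7f;        /* strip the extension bit */

    printf("IE 0x%02x len %u: cause %u, location %u, diagnostic \"",
           ie[0], len, cause, location);
    for (unsigned i = 4; i < 2 + len; i++)   /* any remaining octets */
        putchar(ie[i]);                      /* here: '3' '0' '5' */
    printf("\"\n");
}

int main(void)
{
    const unsigned char cause_normal[] = { 0x08, 0x02, 0x80, 0x90 };
    const unsigned char cause_102[]    = { 0x08, 0x05, 0x82, 0xe6,
                                           0x33, 0x30, 0x35 };

    decode_cause_ie(cause_normal);   /* cause 16, location 0 (user)        */
    decode_cause_ie(cause_102);      /* cause 102, location 2, diag "305"  */
    return 0;
}
{noformat}

So the first Cause is 16 (Normal Clearing, location user) and the second is 102 (location public network serving the local user), with the diagnostic bytes 33 30 35 being the ASCII digits '305', i.e. the exchange naming the timer that expired.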

If it's any help, here's the timing of the instances this week (grep 'cause 102' /var/log/asterisk/full.X) so you can see the clumping (there's a small per-minute tally sketch after the listings):

Monday:

{noformat}
[May 17 09:41:38] VERBOSE[4472] logger.c:     -- Channel 0/7, span 2 got hangup, cause 102
[May 17 15:44:21] VERBOSE[4471] logger.c:     -- Channel 0/5, span 1 got hangup, cause 102
[May 17 15:44:26] VERBOSE[4471] logger.c:     -- Channel 0/1, span 1 got hangup, cause 102
[May 17 15:44:37] VERBOSE[4472] logger.c:     -- Channel 0/4, span 2 got hangup, cause 102
[May 17 15:46:11] VERBOSE[4472] logger.c:     -- Channel 0/9, span 2 got hangup, cause 102
[May 17 15:46:19] VERBOSE[4472] logger.c:     -- Channel 0/3, span 2 got hangup, cause 102
{noformat}

Tuesday:

{noformat}
[May 18 14:26:17] VERBOSE[4472] logger.c:     -- Channel 0/7, span 2 got hangup, cause 102
[May 18 14:26:24] VERBOSE[4472] logger.c:     -- Channel 0/8, span 2 got hangup, cause 102
[May 18 14:27:14] VERBOSE[4471] logger.c:     -- Channel 0/6, span 1 got hangup, cause 102
[May 18 14:27:45] VERBOSE[4471] logger.c:     -- Channel 0/5, span 1 got hangup, cause 102
{noformat}

Today (so far):

{noformat}
[May 19 11:03:27] VERBOSE[4472] logger.c:     -- Channel 0/6, span 2 got hangup, cause 102
[May 19 11:04:15] VERBOSE[4471] logger.c:     -- Channel 0/2, span 1 got hangup, cause 102
[May 19 11:04:23] VERBOSE[4472] logger.c:     -- Channel 0/5, span 2 got hangup, cause 102
[May 19 11:04:32] VERBOSE[4471] logger.c:     -- Channel 0/3, span 1 got hangup, cause 102
{noformat}
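
As mentioned above, the grep output can be piped through a throwaway per-minute tally to make the clumping more obvious. This is just a hypothetical helper, not anything in the Asterisk tree, and it only relies on the '[May 18 14:27:45]' timestamp prefix shown in the listings:

{noformat}
/* Throwaway filter: read the grep output on stdin, count hits per minute. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    char current[16] = "";
    char key[16];
    int count = 0;

    while (fgets(line, sizeof(line), stdin)) {
        if (line[0] != '[' || strlen(line) < 18)
            continue;                    /* not a "[May 18 14:27:45]" line */
        memcpy(key, line + 1, 12);       /* keep "May 18 14:27", drop ":45]" */
        key[12] = '\0';
        if (strcmp(key, current) != 0) { /* new minute: flush the old count */
            if (count)
                printf("%s  %d\n", current, count);
            strcpy(current, key);
            count = 0;
        }
        count++;
    }
    if (count)
        printf("%s  %d\n", current, count);
    return 0;
}
{noformat}

e.g. grep 'cause 102' /var/log/asterisk/full.1 | ./tally (the binary name is made up) prints one line per minute with the number of cause-102 hangups in it.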




****** ADDITIONAL INFORMATION ******

Asterisk 1.4.31
LibPri 1.4.10.2

Digium, Inc. Wildcard TE205P (rev 02)
2 PRI ISDN circuits (one with 30 channels, one with 8)

> Q931 not sending RELEASE after getting DISCONNECT
> -------------------------------------------------
>
>                 Key: PRI-96
>                 URL: https://issues.asterisk.org/jira/browse/PRI-96
>             Project: LibPRI
>          Issue Type: Bug
>          Components: General
>            Reporter: Russell Brown
>         Attachments: cause102
>
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


