Archive for February, 2014

I came across an interesting issue for a Swiss customer who was having a problem with call-forward to an internal extension on their CME systems.

Call-forward to an internal extension seems quite straightforward, so I checked how telephony-service was configured and found that both the call-forward patterns and the transfer-pattern were set to .T, so no real issue there. The customer added that it used to work fine but was no longer working. After asking a few questions I found that they had recently migrated from ISDN to a SIP provider, so all incoming calls were now arriving over SIP while I had been assuming it was all ISDN.
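
For context, the relevant telephony-service lines looked roughly like this (a minimal sketch of just the two commands in question, not the customer's full configuration):

!
telephony-service
 call-forward pattern .T
 transfer-pattern .T
!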

I ran our usual ccsip, dial-peer and ccapi inout debugs and found that when the call was forwarded from 608 to 612, the CME sent a “SIP/2.0 302 Moved Temporarily” and the call was then disconnected.
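
For reference, these are the debugs in question (the exact keywords can vary slightly between IOS releases):

debug ccsip all
debug voip dialpeer
debug voip ccapi inout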


SIP/2.0 302 Moved Temporarily
Via: SIP/2.0/UDP 62.2.x.x:5060;branch=z9hG4bKtjjtl830982hmqr9f6u0.1
From: 076XXXXXX<sip:0762757666@vssOTF012.cablecom.net:5060;user=phone>;tag=1680239575
To: 608<sip:0442296008@cableXXX:5060;user=phone>;tag=BA4F3C-6D2
Date: Mon, 24 Feb 2014 12:06:50 GMT
Call-ID: 7692dde2-11ccfac8-1ce04e37-1bb8@cableXXX
CSeq: 1 INVITE
Allow-Events: telephone-event
Server: Cisco-SIPGateway/IOS-12.x
Diversion: ;reason=no-answer;counter=1
Contact: <sip:612@192.168.x.x>
Content-Length: 0


When a call comes in on a SIP trunk and gets forwarded (CFNA / CFB / CFA), the default behavior is for the CME to send a 302 “Moved Temporarily” SIP message back to the Service Provider (SP) proxy. Some providers do not support this, and that was the case here. The same applies to transfers: by default the CME sends a SIP REFER to the SP proxy, and in this case the SP was not able to handle that well either.

Before opening a ticket with the carrier, I decided to try disabling this default behavior on the CME with the following commands.

!
voice service voip
no supplementary-service sip refer
no supplementary-service sip moved-temporarily
!
!

This fixed the issue without having to log a ticket with the carrier, since I changed the behavior on the CME side instead.
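
As a side note, depending on the IOS release the same commands may also be available at the dial-peer level, in case you only want to change this behavior for the SIP trunk rather than globally. A minimal sketch, assuming the trunk uses dial-peer 100 (the tag here is just an example):

!
dial-peer voice 100 voip
 no supplementary-service sip refer
 no supplementary-service sip moved-temporarily
!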

Now when the call is forwarded to 612, I get a 180 Ringing instead of the 302 Moved Temporarily:

SIP/2.0 180 Ringing
Via: SIP/2.0/UDP 62.2.X.X:5060;branch=z9hG4bKsn1fs200c8g1qq3af7f1.1
From: 072XXXXXX <sip:072XXXXXX@CableXXX:5060;user=phone>;tag=1247061884
To: 608 <sip:608@vbcOTF005.cablecom.net:5060;user=phone>;tag=12BA304-1D1B
Date: Mon, 24 Feb 2014 14:10:38 GMT
Call-ID: 5e59c683-7fa02bdf-214b80c1-5e8d@CableXXX
CSeq: 1 INVITE
Allow: INVITE, OPTIONS, BYE, CANCEL, ACK, PRACK, UPDATE, REFER, SUBSCRIBE, NOTIFY, INFO, REGISTER
Allow-Events: telephone-event
Remote-Party-ID: "Michele 612" <sip:612@192.168.X.X>;party=called;screen=no;privacy=off
Contact: <sip:608@192.168.X.X:5060>
Server: Cisco-SIPGateway/IOS-12.x
Content-Length: 0


A few weeks back I came across an issue for a customer running a CUCM 9.1(2) cluster.

The issue was that outbound calls dialed through a translation pattern were not reaching either of the MGCP gateways, even though both gateways were registered fine with CallManager.

I checked the E1s and the MGCP status and all was OK. The Route List was reset as well, with no joy, and I also reset MGCP quite a few times, with no luck.
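
For reference, the gateway-side checks I mean are along these lines, confirming the ISDN layer status of the E1s and the MGCP registration towards CallManager (both were clean in this case):

show isdn status
show ccm-manager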

During test calls I found the following in the CCM traces:

93958336.006 |18:10:05.348 |AppInfo  |RouteListCdrc::null0_CcSetupReq – Selecting a device.

93958336.007 |18:10:05.348 |AppInfo  |RouteListCdrc::selectDevices — mTemporaryDeviceInfoList.size = 1.

93958336.008 |18:10:05.348 |AppInfo  |RouteListCdrc::null0_CcSetupReq: Execute a route action.

93958336.009 |18:10:05.348 |AppInfo  |RouteListCdrc::algorithmCategorization — CDRC_SERIAL_DISTRIBUTION type=1

93958336.010 |18:10:05.348 |AppInfo  |RouteListCdrc::whichAction — DOWN (Current Group) = 1

93958336.011 |18:10:05.348 |AppInfo  |RouteListCdrc::routeAction — current device name=a75fd367-37e5-e702-9336-505837d6fc48, down

93958336.012 |18:10:05.348 |AppInfo  |RouteListCdrc::executeRouteAction: SKIP_TO_NEXT_MEMBER

93958336.013 |18:10:05.348 |AppInfo  |RouteListCdrc::skipToNextMember

.

.

.

93958337.005 |18:10:05.348 |AppInfo  |RouteListCdrc::algorithmCategorization — CDRC_SERIAL_DISTRIBUTION type=1

93958337.006 |18:10:05.348 |AppInfo  |RouteListCdrc::whichAction — DOWN (Current Group) = 1

93958337.007 |18:10:05.348 |AppInfo  |RouteListCdrc::routeAction — current device name=64b7ded7-1c4b-87c4-dbde-dd931577cd99, down

93958337.008 |18:10:05.348 |AppInfo  |RouteListCdrc::executeRouteAction: SKIP_TO_NEXT_MEMBER

93958337.009 |18:10:05.348 |AppInfo  |RouteListCdrc::skipToNextMember

.

.

.

93958338.001 |18:10:05.348 |AppInfo  |RouteListCdrc::null0_CcSetupReq check vipr call mViprReroute=0 mViprAlreadyAttempt=0 CI=35154630 BRANCH=0

93958338.002 |18:10:05.348 |AppInfo  |RouteListCdrc::null0_CcSetupReq – Terminating a call after the RouteListCdrc cannot find any more device.

93958338.003 |18:10:05.348 |AppInfo  |RouteListCdrc::terminateCall – No more Routes in RouteListName = RL_LIP.  Rejecting the call

93958338.004 |18:10:05.348 |AppInfo  |RouteListCdrc::terminateCall – Sending CcRejInd, with the cause code (41), to RouteListControl because all devices are busy/stopped.

93958338.005 |18:10:05.348 |AppInfo  |GenAlarm: AlarmName = RouteListExhausted, subFac = CALLMANAGERKeyParam = , severity = 4, AlarmMsg = RouteListName : RL_LIP, Reason=41,


This shows the problem: CallManager rejects the call with cause code 41 (temporary failure) because, for some reason, both gateways appear to it as either stopped or busy, even though both were up with all 30 channels available.

I searched for bugs but could not find any matching a CUCM 9.1(2) Route List problem. I also tried changing the order of the E1s in the Route Group, but that did not help either.

The issue was only resolved when I went for a cluster reboot. It seems like the provisioning service sometimes does not get updated properly and gets stuck.
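
If you end up needing the same fix, a full cluster reboot can be done from the platform CLI on each node, publisher first and then the subscribers:

utils system restart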

Just today I found that Cisco has in fact added this as a bug (on Feb 7th) in their Bug Toolkit. Although they mention that a reset of the RL/RG will resolve the issue, it did not in my case.

This is just a heads-up in case you come across a similar issue on a 9.1(2) cluster. You may try resetting the RL/RG first, but a cluster reboot will definitely resolve the issue.

Bug ID: CSCum85086 - Outbound calls through RL failing, RG members reported as down