RTP Timeout disconnects after 31 seconds

Hello, all of my outgoing calls are suddenly disconnecting after 31 seconds and I cannot figure out why. Incoming calls from the provider work as expected.

The disconnect cause I’m getting is “Rtp Timeout”, always from one of the two SEMS nodes I have. The disconnect initiator is always “1 - Traffic switch”.

My instant guess was NAT, since SIP and NAT aren’t exactly best friends, but we haven’t changed anything in our network for a couple of months and this only started appearing yesterday. We also have everything on public static IP addresses.

Is there anyone who could help with this issue?

EDIT: I forgot to add that RTP actually works for the first 30 seconds; party A can hear party B and vice versa.

You have to collect a pcap trace for such a call. Then it will be possible to say what happens.

Hello, thanks for the reply. I cannot upload files here since I’m a new user.
https://ufile.io/1asltiuv

.238 is the load balancer (Kamailio, load balancing to .235 and another SEMS node), .235 is the SEMS node, and .35 is the provider.
.45 is another load balancer for our customer side, running Kamailio, and .41 is one of the handling nodes, also running Kamailio.

We need a trace with dump level = capture all, where RTP is present.

https://ufile.io/f/8ingx

Two files, one with the RTP timeout and one without.

Every once in a while we now get a call that actually works, and then it’s a normal RTP stream.

In the one without the RTP timeout I said “testing 1” on a phone from our side and “testing 2” on a phone from the provider side. I heard both on both phones, but as you can hear, our servers captured a completely different signal. I’ve never heard anything like this before; it sounded somewhat like a horror movie.

rtptimeout-butrtpworksbothways.pcap - there are no RTP packets in this trace. It looks like it was captured with dump level = capture signaling traffic, or Yeti is really not receiving RTP there (in which case RTP timeout is the proper disconnect reason).

rtpworksweirdnoises.pcap - there is one-way RTP: originator (217.10.104.218) -> Yeti (.235) -> termination gw (.35)

Try to upload the files here; it should be allowed now.

I made two tcpdump captures on the SEMS node instead of relying on the dumps provided by Yeti. It seems like what you’re saying is correct: the SEMS node isn’t capturing any RTP for some of the calls, making it close the session because of the lack of RTP traffic, and in the calls that do work SEMS is only capturing RTP from within our customer network. Why would this happen? Incorrectly configured RTP relaying?

Uploaded two new files taken with the above-mentioned tcpdump on the SEMS machine:
tcpdump rtp timeout filtered.pcap (13.2 KB)
tcpdump working call filtered.pcap (1.7 MB)
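
For anyone else going through dumps like these, a rough scapy sketch of the sanity check I mean (the file name is one of the uploads above; the SIP port and everything else here reflect my setup, adjust as needed). It just counts UDP packets per source/destination pair so you can see at a glance whether any media reached the SEMS node:

```python
# Rough sketch: count UDP flows in a capture to see whether RTP ever
# reached the SEMS node. Requires scapy (pip install scapy).
from collections import Counter
from scapy.all import rdpcap, IP, UDP

PCAP = "tcpdump rtp timeout filtered.pcap"   # one of the uploads above
SIP_PORT = 5060                              # skip signaling, keep media

flows = Counter()
for pkt in rdpcap(PCAP):
    if IP in pkt and UDP in pkt:
        udp = pkt[UDP]
        if udp.sport == SIP_PORT or udp.dport == SIP_PORT:
            continue                         # not interested in SIP here
        flows[(pkt[IP].src, pkt[IP].dst)] += 1

for (src, dst), count in flows.most_common():
    print(f"{src} -> {dst}: {count} UDP packets")
# If nothing (or only one direction) shows up towards the SEMS node,
# the "Rtp Timeout" disconnect cause is legitimate.
```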

After taking a look at these dumps myself, I noticed that the main difference is which of our provider’s IPs we send the packets to. They have two IP addresses, .67 and .35, and Yeti load balances between them.

When the calls go through .35 they more often than not get an RTP timeout. I will talk to our provider and see if this is the cause.
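
For reference, a rough way to verify this from a dump (again scapy; only the last octets of the provider addresses appear in this thread, so the full IPs below are placeholders) is to count how many media packets go to, and come back from, each provider IP:

```python
# Minimal sketch: compare RTP sent to / received from each provider IP.
# The full addresses are placeholders; only the last octets (.35 / .67)
# are known here.
from collections import Counter
from scapy.all import rdpcap, IP, UDP

PCAP = "yeti captured working call.pcap"
PROVIDER_IPS = {"x.x.x.35", "x.x.x.67"}   # placeholders, fill in real IPs
SIP_PORT = 5060

sent, received = Counter(), Counter()
for pkt in rdpcap(PCAP):
    if IP in pkt and UDP in pkt:
        udp = pkt[UDP]
        if udp.sport == SIP_PORT or udp.dport == SIP_PORT:
            continue                       # keep only media packets
        if pkt[IP].dst in PROVIDER_IPS:
            sent[pkt[IP].dst] += 1
        elif pkt[IP].src in PROVIDER_IPS:
            received[pkt[IP].src] += 1

for ip in sorted(PROVIDER_IPS):
    print(f"{ip}: sent {sent[ip]}, received {received[ip]}")
# A provider IP with packets sent but none received points at the leg
# that triggers the RTP timeout.
```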

However, it’s still weird that only outgoing calls from us are affected. I have disabled .35 in Yeti and will reply if I still need help. As of now everything looks stable.

Could you provide the trace captured by Yeti?

yeti captured rtp timeout.pcap (12.9 KB)
yeti captured working call.pcap (1.6 MB)

The same calls as above, but captured by Yeti.

yeti captured working call.pcap - two-way RTP, everything looks OK.
yeti captured rtp timeout.pcap - no RTP at all. The only mystery there is your statement that the audio is OK from the customer’s point of view. I think you are wrong there.

recommendations:

  1. Try to enable RTP ping on the termination and origination gateways. Sometimes transit systems (your customer and vendor) do not send RTP until they receive the first packet. RTP ping from Yeti may solve this problem (a toy sketch of what such a ping amounts to follows below).
  2. Upgrade your system. 1.7.63 was released ages ago.
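
To illustrate recommendation 1: the idea of an RTP ping is simply that the media relay sends the first small RTP packet towards a peer, so gateways or NAT devices that wait for inbound media before transmitting start sending. A toy sketch of that idea in Python (this is not Yeti’s implementation; the address, port, and payload type are made up for the example):

```python
# Toy illustration of an "RTP ping": send one minimal RTP packet so a
# far end that waits for inbound media before transmitting starts sending.
# Not Yeti's implementation; address, port and payload type are made up.
import socket
import struct

def rtp_ping(dst_ip: str, dst_port: int) -> None:
    version = 2                       # RTP version 2 -> first byte 0x80
    payload_type = 0                  # 0 = PCMU, just as an example
    seq, timestamp, ssrc = 1, 0, 0x12345678
    header = struct.pack(
        "!BBHII",
        version << 6,
        payload_type,
        seq,
        timestamp,
        ssrc,
    )
    payload = b"\xff" * 160           # ~20 ms of PCMU "silence"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(header + payload, (dst_ip, dst_port))

# Example: rtp_ping("192.0.2.10", 20000)  # documentation-range address
```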

If you need help with the upgrade, ping me in a DM or chat.