Travis Hawkins Posted July 11, 2018 All- We're located in the Midwest and we (along with all of our clients) are having major call quality issues today on the hosted platform, mostly one-way audio. We've opened a ticket and I'm on with support right now, but they said they don't see anything wrong so far and they've had no other reports. Is anyone else experiencing issues? We've had reports from about a 300-mile radius thus far and it appears to be affecting every single one of our customers. We're getting no audio on about 9/10 calls; occasionally a call will work, but the audio still sounds horrible when it does come through. Just curious if anyone else is seeing this? Thanks. Travis
atmosphere617 Posted July 11, 2018 Also in the Midwest, but hosting our own systems. No issues here. Have you looked at any packet captures?
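[Editor's note: for anyone hitting similar one-way audio symptoms, here is a minimal capture-side sketch of the check atmosphere617 is suggesting. It assumes scapy is installed, you can sniff with sufficient privileges, and the phone subnet and RTP port range below are placeholders (not 2600Hz's actual values); during a test call it simply counts RTP-sized UDP packets in each direction.]

```python
# A minimal sketch, assuming scapy is available and run with sniffing privileges.
# PHONE_NET and RTP_PORTS are placeholders -- adjust to your own deployment.
from collections import Counter
from scapy.all import sniff, IP, UDP

PHONE_NET = "192.168.1."          # assumption: local phone subnet prefix
RTP_PORTS = range(10000, 40001)   # assumption: typical RTP/UDP port range

counts = Counter()

def tally(pkt):
    # Count likely-RTP UDP packets by direction relative to the phone subnet
    if IP in pkt and UDP in pkt and pkt[UDP].dport in RTP_PORTS:
        direction = "outbound" if pkt[IP].src.startswith(PHONE_NET) else "inbound"
        counts[direction] += 1

# Capture for 30 seconds while placing a test call
sniff(filter="udp", timeout=30, prn=tally, store=False)
print(counts)
```

If one direction stays near zero while the other climbs during the test call, the audio is being dropped somewhere on the network path rather than in the PBX itself.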
Brian Dunne Posted July 11, 2018 No issues for us on the hosted side. Are you having issues calling locally (ext-to-ext)? That would rule your SIP trunking provider in or out; they're where I'd point the finger if calls that don't touch the PSTN are working fine. It could also be a regional ISP/backbone hit, though there's nothing obvious on downdetector.net like the Level 3/Comcast fiber cut a couple of weeks back.
Travis Hawkins (Author) Posted July 11, 2018 @Brian Dunne Good question on the PSTN side, but yes, it's happening on internal ext-to-ext calls in our office as well. From what I can tell we're having no connectivity issues to any of the proxies, so I don't *think* it's a Level 3 issue, but I can't confirm that of course.
Travis Hawkins (Author) Posted July 11, 2018 @atmosphere617 I haven't performed any captures yet, as I've been working directly with 2600Hz support thus far, but they still haven't found anything. I'll start investigating more internally now. I was really just curious if anyone else on hosted in the Midwest was seeing issues, but thanks for replying, and I'll see what other troubleshooting I can do here.
Tuly Posted July 11, 2018 Report Posted July 11, 2018 we also see higher ping time to 2600HZ east servers and about %1 packet loos,
Darren Schreiber (Administrators) Posted July 11, 2018 An update on this. This morning we received this complaint (posted by the OP who started this thread). We can replicate 100% packet loss to some of their clients over Cogent, but everything works OK over Level 3. Around 11am PT we tried a workaround to route to (roughly) their block of IPs via Level 3 only, but it seems to have caused issues with OTHER clients, so we've undone that change as of 1pm PT today. At this point:
* If you had call quality complaints between 11am and 1pm PT (roughly), they are probably related to the workaround we attempted for the poster of the original topic here, and are likely resolved.
* The original issue reported by the poster of this topic is still ongoing but appears isolated to their customers for some reason. We are still trying to determine why. The packets make it to the provider (Suddenlink) but apparently see heavy packet loss past entry into Suddenlink.
So, to be clear: Tuly, your issue is probably already fixed. Travis, your issue is probably still a peering issue at Suddenlink. We're looking for workarounds to avoid this, but the fastest/easiest way to resolve it is likely to change the DIDs and proxies to use SJC instead. We believe this issue is regional to you.
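[Editor's note: before repointing DIDs and proxies like Darren suggests, a quick loss comparison between the current and alternate sites can confirm whether the alternate path is actually clean from your location. A minimal sketch follows; the proxy hostnames are placeholders rather than real 2600Hz names, and it assumes a Linux/macOS ping.]

```python
# A rough sketch: ping each candidate proxy and report packet loss, to compare
# the current route (e.g. ORD) against an alternate site (e.g. SJC).
# Hostnames are placeholders; assumes Linux/macOS ping output format.
import re
import subprocess

PROXIES = {                      # assumption: substitute your actual proxy FQDNs/IPs
    "ORD": "proxy-ord.example.com",
    "SJC": "proxy-sjc.example.com",
}

for site, host in PROXIES.items():
    out = subprocess.run(
        ["ping", "-c", "20", host],
        capture_output=True, text=True
    ).stdout
    m = re.search(r"([\d.]+)% packet loss", out)
    loss = m.group(1) if m else "unknown"
    print(f"{site} ({host}): {loss}% packet loss")
```

Running it from a few affected customer sites (and from an unaffected one as a control) helps distinguish a regional peering problem from something local to one circuit.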
Rick Guyton Posted July 12, 2018 Hey Travis, We've been seeing a lot of this too in AZ. I'm trying not to put on the tin foil hat over here, but it started after net neutrality started to get knocked around for us... At the end of the day, we use Queue Trees on our MikroTik routers and I just have to limit down the max throughput on them. On a 25/5 connection I used to set a 24/4 throttle to ensure we controlled prioritization, but more and more I have to go down to 23/3. 😕 I'm really hoping some day we'll get a definitive process for troubleshooting call quality issues from 2600. I understand it's a complex issue, but it'd sure be nice to know what they expect us to do on our side before going to them.
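[Editor's note: here is a rough sketch of the "shape just below the provisioned rate" idea Rick describes. The 92% factor, queue names, and exact RouterOS command form are assumptions, not MikroTik or 2600Hz guidance; the point is only that keeping the bottleneck queue on your own router lets your Queue Tree priorities actually take effect.]

```python
# A rough sketch (assumptions throughout): compute shaped limits slightly below
# the provisioned line rate and print RouterOS commands to paste in manually.
# Queue names ("download"/"upload") and the 92% factor are placeholders.
def queue_tree_commands(down_mbit: float, up_mbit: float, factor: float = 0.92):
    down = int(down_mbit * factor)
    up = int(up_mbit * factor)
    return [
        f'/queue tree set [find name="download"] max-limit={down}M',
        f'/queue tree set [find name="upload"] max-limit={up}M',
    ]

# Example: a 25/5 circuit shaped to roughly 23/4
for cmd in queue_tree_commands(25, 5):
    print(cmd)
```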
Darren Schreiber (Administrators) Posted July 12, 2018 @Rick Guyton I think your report describes a chronic issue, whereas Travis's issue is that suddenly, today, all of his clients started complaining at the same time. From our side, we saw 100% packet loss to his customers from ORD. Frankly, I don't think your issue in AZ is the same; the symptoms and timeframes don't match.
Rick Guyton Posted July 12, 2018 (replying to Darren Schreiber above) Yeah, you're probably right. I misread the original post, I guess. I'm thinking mine is more of an ongoing local issue with Cox; thought it might be related.
fhill Posted July 23, 2018 @Travis Hawkins Has your issue been resolved? If so, what was the cause?
Travis Hawkins (Author) Posted August 15, 2018 @fhill Sorry I haven't been on the forums lately, but yes, our issue was resolved the next day. Apparently there was an issue at a major carrier exchange in Kansas City, MO that was affecting traffic throughout the Midwest. It was basically an asymmetric routing issue: traffic was taking one route out, but the return traffic was trying to take a different route back. It caused all kinds of issues, but voice was obviously the easiest to detect since it's all real-time; browser sessions were able to wait and recover, so the Internet just seemed a little "slow" to end users. Three of the local ISPs all had to work with the regional exchange to resolve the issue, but I never heard the root cause.