Rick Guyton

Moderators
Community Answers

  1. A few things. FYI, MACs and local IPs for phones are available in the debugging app, so there's no need to broadcast scan and ARP. I use this all the time to access a phone's WebUI while remoting into a client PC. BTW, simple-help.com is a great cost-effective self-hosted remote access service. Also, the provisioner can reboot phones easily. The real issue with shipping routers is you have to be 100% right or you are screwed. Also, sometimes you need to set up PPPoE bridging to avoid double NAT. It's a mess; if you are going to offer a router you need someone onsite.
  2. A lot of what @Karl Stallknecht is saying is very true. At the end of the day you need three things: a good LAN, solid connectivity, and sufficient bandwidth.

A Good LAN: Well, this is 50% of the battle TBH. At the end of the day, if the LAN sucks, so will the phones. Grab a VLAN if you can get it (remarkably hard with many IT people for some reason). But more than anything you need good IT. If you are targeting SMBs, that means finding local IT folks or doing the IT for them. (I think Karl does his customers' IT.) Good IT folks will usually be comfortable with Cisco (not Meraki, but Cisco IOS), MikroTik, Palo Alto, or Juniper equipment. All they REALLY need to know are basics like jitter, packet loss, latency, and VLANing. But most people who are proficient in this stuff aren't using Netgear.

Solid Connectivity: We use PingPlotter extensively for this. I like to run continuous pings to a couple of the phones, the router, the cable modem (192.168.100.1 typically), the ISP gateway, 1.0.0.1, 8.8.8.8, and 4.2.2.2. This will give you a VERY accurate picture of jitter, packet loss, and latency OVER TIME, and tell you where it's starting. For instance: packet loss to the phones means LAN issues; to the router, LAN issues or a bad router; to the modem, a bad modem; to the gateway, a bad circuit; gateway OK but the others bad, peering issues with the ISP. I'll roll this out in advance if I'm able and get these issues resolved before deploying if I can.

Bandwidth: Nope, 1.5Mbps ADSL isn't going to cut it... There are two ways to handle this: either have them buy way more than they'll ever use (most people do this) or set up bandwidth management. I use MikroTik routers personally wherever I can to do the latter.

So, to your question on how to prevent these issues: figure out a way to cover the above bases. Top of the list should be partnering with good IT folks. To your question about how everyone else does it? Well... Most go Karl's route and do the IT too. Or they don't mess with it. They sell to everyone and take the 80%. The 20% will get tech support that basically just points the finger at IT (I mean, they aren't wrong really). The vast majority of SMBs, especially with the rise of cloud services, don't even have any IT support AT ALL, so they will deal with the occasional bad quality or move on to a landline provider. The remainder will blame their IT folks until they figure it out or do the same.
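The continuous-ping diagnosis above boils down to three numbers per target: latency, jitter, and packet loss. Here's a minimal stdlib-only sketch of that math (not PingPlotter, just an illustration) that summarizes a series of RTT samples, where None marks a lost probe:

```python
from statistics import mean

def link_stats(rtts_ms):
    """Summarize a series of ping results for one target.

    rtts_ms: round-trip times in ms, in probe order; None means the probe was lost.
    Returns (avg latency ms, jitter ms, packet loss %).
    """
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    if not received:
        return None, None, loss_pct
    latency = mean(received)
    # Jitter as mean absolute difference between consecutive replies,
    # similar in spirit to the RFC 3550 interarrival jitter estimate.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = mean(diffs) if diffs else 0.0
    return latency, jitter, loss_pct

# Example: one probe series for, say, the ISP gateway hop
lat, jit, loss = link_stats([20.0, 22.0, None, 21.0, 25.0])
```

Run one of these per hop (phone, router, modem, gateway, public anycast IPs) over time and you get the same where-does-it-start picture described above.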
  3. Ok, last update and I'm leaving this be. The timestamp in the msg_id is down to the microsecond, so I really don't need the separate timestamp field on top of it. Messages can be de-duped with routing key + msg_id.
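For anyone wiring this up, the de-dup described above amounts to keeping a seen-set keyed on the routing_key plus msg_id pair. A minimal sketch (the field names come from the event payloads in this thread; the sample values are purely illustrative):

```python
def dedup(events):
    """Drop duplicate Kazoo websocket events.

    Two overlapping listeners can each deliver the same event; the
    (routing_key, msg_id) pair identifies it uniquely, since msg_id
    embeds a microsecond timestamp.
    """
    seen = set()
    unique = []
    for ev in events:
        key = (ev["routing_key"], ev["msg_id"])
        if key not in seen:
            seen.add(key)
            unique.append(ev)
    return unique

events = [
    {"routing_key": "call.acct.CHANNEL_HOLD.abc", "msg_id": "m-1"},
    {"routing_key": "call.acct.CHANNEL_HOLD.abc", "msg_id": "m-1"},  # same event from agent 2
    {"routing_key": "call.acct.CHANNEL_UNHOLD.abc", "msg_id": "m-2"},
]
unique = dedup(events)
```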
  4. Ugh, just in case anyone is following along... from_tag and to_tag eventually stay the same. It differs between hold/unhold #1 and hold/unhold #2, but not thereafter. I'll be using both msg_id and timestamp to de-dup with. This leaves open the possibility that if someone managed to hold/unhold/hold again within 1 second it'd throw off my stats. But I feel like that'd be a pretty edge case. I think AMQP has internal IDs; it'd be nice to get at least a hash of AMQP's internal ID to de-dup with. But this is the best there is for the moment as far as I can see.
  5. Awesome as always @mc_ thanks! I'm going to concat all the things!!!
  6. Hey @mc_, do you know what "from_tag" is? This also seems to be unique across different hold events. Also, seems like you are right, the msg_id does appear to be a timestamp
  7. I've got an agent in python that connects, subscribes to one or more accounts (and sub-accounts), records events and dumps them out to a file. It then uploads to backblaze storage for processing by another agent. I'll be open sourcing that part for sure most likely before the end of the year. If you could use that, I'd hate for you to duplicate your efforts.
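The agent described above is basically subscribe, record, dump, upload. A stdlib-only sketch of just the record-and-dump step (the websocket subscription and Backblaze upload are omitted, and the one-file-per-window layout here is my illustration, not necessarily how the actual agent lays files out):

```python
import json
import tempfile
from pathlib import Path

def record_events(events, out_dir, window_id):
    """Append events as JSON lines to a per-window dump file.

    One file per 30-minute listening window keeps the later
    upload-and-process step simple: ship the file, start a new one.
    """
    path = Path(out_dir) / f"events-{window_id}.jsonl"
    with path.open("a", encoding="utf-8") as f:
        for ev in events:
            f.write(json.dumps(ev) + "\n")
    return path

out_dir = tempfile.mkdtemp()
path = record_events(
    [{"name": "CHANNEL_HOLD", "msg_id": "m-1"}], out_dir, "2020-11-17T00-00"
)
lines = path.read_text(encoding="utf-8").splitlines()
```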
  8. NP, if you haven't started on the websocket code yet and Python is acceptable to you, you might want to wait a bit before diving in. πŸ˜‰
  9. So, you are listening on the AMQP. So regardless of the origin, it should come up. Are you looking to verify that this is the case? Or are you trying to differentiate between holds from the Op Console vs physical phone holds? EDIT: my example was done on a physical phone
  10. I'll concat msg_id and the routing key just to be safe. That'll do it I think. Thanks a bunch @mc_!
  11. @FASTDEVICE Neither? AFAIK connecting to WS is equivalent to listening to the AMQP. So effectively you are seeing ecallmgr internally reporting to kazoo apps that a hold event has occurred after freeswitch told it so. I'm not 100% sure about that TBH. But, here's an example of a hold straight off the wire if it helps (censored like crazy obviously) {"action": "event", "subscribed_key": "call.*.*", "subscription_key": "call.MY_ACCOUNT_ID_HERE.CHANNEL_HOLD.*", "name": "CHANNEL_HOLD", "routing_key": "call.MY_ACCOUNT_ID_HERE.CHANNEL_HOLD.CALL_ID_HERE", "data": {"to_tag": "NOT_SURE_IF_THIS_IS_SENSITIVE", "timestamp": 63743922729, "switch_url": "sip:mod_sofia@SOME_IP:11000", "switch_uri": "sip:SOME_IP:11000", "switch_nodename": "freeswitch@fs003.ord.p.zswitch.net", "switch_hostname": "fs003.ord.p.zswitch.net", "presence_id": "MY_EXT@MY_REALM", "other_leg_direction": "inbound", "other_leg_destination_number": "+MY_PHONE_NUM", "other_leg_caller_id_number": "+MY_CELL_PHONE_NUM", "other_leg_caller_id_name": "Rich Guyton Iii", "other_leg_call_id": "OTHER_CALL_LED_ID", "media_server": "fs003.ord.p.zswitch.net", "from_tag": "NOT_SURE_IF_THIS_IS_SENSITIVE", "disposition": "ANSWER", "custom_sip_headers": {"x_kazoo_invite_format": "contact", "x_kazoo_aor": "sip:MY_USER_NAME@MY_REALM"}, "custom_channel_vars": {"account_id": "MY_ACCOUNT_ID_HERE", "authorizing_id": "NOT_SURE_IF_THIS_IS_SENSITIVE", "authorizing_type": "device", "bridge_id": "OTHER_CALL_LED_ID", "call_interaction_id": "INTERACTION_ID", "channel_authorized": "true", "ecallmgr_node": "ecallmgr@apps002.ord.p.zswitch.net", "global_resource": "false", "inception": "+MY_PHONE_NUM@SOME_IP", "owner_id": "MY_OWNER_ID", "realm": "MY_REALM", "username": "MY_USER_NAME"}, "custom_application_vars": {}, "channel_state": "EXCHANGE_MEDIA", "channel_name": "sofia/sipinterface_1/MY_USER_NAME@MY_REALM", "channel_created_time": 1576703524752278, "channel_call_state": "HELD", "caller_id_number": "+MY_CELL_PHONE_NUM",
"caller_id_name": "CleaRing - Rich Guyton Iii", "callee_id_number": "+MY_PHONE_NUM", "callee_id_name": "CleaRing", "call_direction": "outbound", "call_id": "CALL_ID_HERE", "msg_id": "NOT_SURE_IF_THIS_IS_SENSITIVE", "event_name": "CHANNEL_HOLD", "event_category": "call_event", "app_version": "4.0.0", "app_name": "ecallmgr"}}
  12. I'm developing a reporting app, and as part of it I have two "agent" servers connecting to my Kazoo API using websockets and listening to events. They each listen for 30 minutes before closing and relaunching. "Agent 1" starts at 0 and 30 minutes past the hour and "agent 2" starts at 15 and 45 minutes past. This way I have some redundancy. This obviously leaves me with duplicate messages, though, and I need a way to de-dup them. I have been using the routing key, and this worked really well until I tried to do reporting on call hold time when calls were held/unheld multiple times in a session. I've looked further into it and it seems like the key I should have been pulling is msg_id. But before I invest the time to re-write my code to use this key to de-dup my messages, I want to be sure this would work as expected. Where you at @mc_? πŸ˜€
  13. Hey Karl! Glad I could help out! I had a feeling a few others probably just gave root creds...
  14. Hi all! I'm sure no one else has done this... But, to get things done, I initially set up a few customer accounts with root AWS access keys to get their call recording going. Needless to say, that's super dangerous. So, I recently invested the time to find the minimal possible permissions to provision an account with AWS, and I thought I might as well share. This assumes you will be assigning each customer a separate bucket. Technically, you could put all your clients into a single bucket, but that makes the permissions much harder. So, here are the step by step directions. They look really long, but it really is very easy; these are just very detailed instructions:

SET UP AN S3 BUCKET
1) Log into your AWS portal and access the S3 app
2) Click Create Bucket
3) Enter a new bucket name. Doesn't matter what it is, but write it down somewhere
4) US West (N. California) for your region
5) Next through the remaining panes and create the bucket. You should read through them and make sure they meet your needs. I especially recommend enabling the "Block ALL public access" option.

SET UP AN IAM USER
1) Access the IAM app
2) Click Add User
3) Enter a new username
4) Check Programmatic access
5) In the next pane, select Attach existing policies directly and then select Create policy
6) This will open a new tab for you to enter your policy into. Click the JSON tab, enter this, and replace "BUCKET_NAME_HERE" with your bucket name from above. Then click Review policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutObject", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME_HERE/*",
        "arn:aws:s3:::BUCKET_NAME_HERE"
      ]
    }
  ]
}

7) Name your policy and click Create policy
8) Back on your IAM tab, click refresh, enter the name of the policy you assigned in step 7 in the search, check it, and press Next
9) The next two pages are for tagging and review; you can just leave them blank and click Create user.
10) On the next page, you will get your access key and secret access key. SAVE THESE! You need them to input into your connector
11) Back on the main page for IAM, click Users, and click on your user account. Save the ARN shown

BUCKET POLICY
1) Go back into your S3 app and click on your bucket
2) Click Permissions, then Bucket Policy, and enter this JSON. Update your bucket name and ARN. Then save.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "ARN_FOR_IAM_USER_HERE" },
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME_HERE/*",
        "arn:aws:s3:::BUCKET_NAME_HERE"
      ]
    }
  ]
}

AWS APP IN KAZOO
1) Now, just enter your AWS info as collected above. If you used the region I recommended above, your host is s3-us-west-1.amazonaws.com.

That's it! If anyone has any feedback, I'd love to hear it. I hope you all find it useful!
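The two JSON documents above differ only in the Principal block, so if you're provisioning one bucket per customer, a small script can stamp both out per customer. A stdlib-only sketch (the bucket name and ARN are placeholders, just as in the post):

```python
import json

def make_policies(bucket, user_arn):
    """Build the minimal IAM policy and matching bucket policy for one customer."""
    statement = {
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:PutObject", "s3:GetObject"],
        "Resource": [f"arn:aws:s3:::{bucket}/*", f"arn:aws:s3:::{bucket}"],
    }
    iam_policy = {"Version": "2012-10-17", "Statement": [statement]}
    # The bucket policy is the same statement plus a Principal pinning the IAM user.
    bucket_statement = dict(statement, Principal={"AWS": user_arn})
    bucket_policy = {"Version": "2012-10-17", "Statement": [bucket_statement]}
    return iam_policy, bucket_policy

iam_policy, bucket_policy = make_policies(
    "BUCKET_NAME_HERE", "ARN_FOR_IAM_USER_HERE"
)
print(json.dumps(bucket_policy, indent=2))
```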
  15. Hi Shabbir, in my experience it's really more about managing your bandwidth locally than relying on the ISP to handle it for you via QoS tagging. Basically, what you do is self-restrict your bandwidth and then shape it. So, if you have a 25/5 connection and you tell the router to only ever use 24/4, you can control what comes into/out of the router first. QoS tagging is nice if your ISP supports it; so far, the only carrier I've seen support it well is CenturyLink though.

I've seen really great results using bandwidth management via pfSense, MikroTik, and SonicWall systems. (Careful with SonicWall though, the old stuff HATES on SIP.) I've had my IT partners implement Meraki and straight Cisco routers with great success as well, though I've never personally set them up.

pfSense is probably the easiest IMO. They sell their own routers now and there's a step by step wizard for configuring them. Their entry price point prevents me from deploying them; I think their least expensive is over $500 last I checked, and that's just too much for me to standardize on.

MikroTik is AMAZING. But... TBH they are a bit like drinking out of a firehose. A firehose with poor docs. Master them and the world of networking is your oyster though. They are no-BS ISP grade hardware, and they have SoHo routers starting under $100 and nothing over $350 (except Cloud Core, and those are overkill for anything less than ISP/datacenter work). I have a script on here to help you get going, but it needs updates. Let me know if you are interested and I'll polish it up for you. Many of my IT partners can do it all themselves now, so I haven't had the impetus to update it.

SonicWalls do this as well, but you should be aware that some of the older SonicWalls do terrible, terrible things to SIP. New stuff seems pretty good, though I would recommend disabling SIP ALG on them.

Meraki are great and pretty easy to configure from what I've seen. But you pay for it on a subscription model, and from what I understand, if you stop paying the subscription the router bricks. Youch... Not my style, but people have great success with them.

Cisco, well, there's a saying: "Nobody gets fired for going with Cisco". And yea, they are the 10,000 pound gorilla, have crazy good name recognition, and are rock solid. But you will pay for that badge and for the consultant to configure it for you...