All Activity

  1. Earlier
  2. No need to search for other occurrences of Iterator(): unless you have custom stuff that uses it, the Trunkstore view is the only place it appears in the Kazoo v4.3 codebase. Nice replication script; I use a simple one written in Bash, but yours is more informative. One thing you can do is add some delay (minutes) between replication requests, which can keep from overwhelming a server. I've found that if I just machine-gun replicate a bunch of DBs, sometimes RAM and CPU get slammed. Also, if worst comes to worst and you have too many issues letting Couch replicate itself, you can do a manual clone: in other words, instead of writing a script to tell Couch what DBs to replicate, you write something that pulls each doc individually and creates it on the target machine. This way takes longer, of course, but can be useful if there's corrupted data or some other issue preventing a normal replication from succeeding fully. You also get an internal reset of revisions on the target, since although the data is the same, each doc is a "new" document on the target side and doesn't have all the previous revision tombstones hanging around. Also, although you aren't doing this, for the benefit of other readers: you can upgrade BigCouch to CouchDB v3, BUT you must first upgrade to v2, and then from there to v3.
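For anyone who wants to try the manual-clone route, here's a minimal sketch of the idea in C# (to match the migration script in the next post). The hosts, credentials, and database name are placeholders, and a real run would want paging on _all_docs plus attachment handling, so treat it as a starting point rather than a finished tool:

using Newtonsoft.Json.Linq;

class ManualClone
{
    const string SourceHost = "x.x.x.x";   // placeholder
    const string TargetHost = "x.x.x.x";   // placeholder
    const string TargetUser = "username";  // placeholder
    const string TargetPass = "password";  // placeholder
    const string Db = "example_db";        // placeholder

    static async Task Main()
    {
        using var http = new HttpClient();

        // Basic auth for the target (same approach as the migration script)
        var auth = Convert.ToBase64String(
            System.Text.Encoding.ASCII.GetBytes($"{TargetUser}:{TargetPass}"));
        http.DefaultRequestHeaders.Authorization =
            new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", auth);

        // Create the target DB; a 412 response just means it already exists
        await http.PutAsync($"http://{TargetHost}:5984/{Db}", null);

        // Pull every document body from the source in one pass
        var json = await http.GetStringAsync(
            $"http://{SourceHost}:5984/{Db}/_all_docs?include_docs=true");

        foreach (var row in (JArray)JObject.Parse(json)["rows"]!)
        {
            var doc = (JObject)row["doc"]!;
            var id = (string)doc["_id"]!;

            // Drop the revision so the target assigns a fresh rev,
            // which is what resets the tombstone history
            doc.Remove("_rev");

            var resp = await http.PutAsync(
                $"http://{TargetHost}:5984/{Db}/{Uri.EscapeDataString(id)}",
                new StringContent(doc.ToString(),
                    System.Text.Encoding.UTF8, "application/json"));
            Console.WriteLine($"{id}: {(int)resp.StatusCode}");
        }
    }
}
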
  3. We're in the process of migrating our database cluster from BigCouch to CouchDB 3.x and wanted to create a thread to document the changes required to keep a Kazoo 4.x cluster running, along with the migration issues we've experienced. When running a replicate from BigCouch to CouchDB, we're getting a number of dropped connections that result in partially transferred databases. Re-running the replication migrates more and more documents over until the replication is ultimately successful, so we wrote a script to automate this (C#, see below). We've confirmed there are no network or firewall issues between the clusters, and they're even on the same subnet. Regardless, the script below worked for us.

using System.Net.Http.Json;
using System.Web;
using Newtonsoft.Json;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using NLog;
using NLog.Config;
using NLog.Targets;
using NLog.Extensions.Logging;
using NLog.Conditions;

class Program
{
    const string SourceHost = "x.x.x.x";
    const string TargetHost = "x.x.x.x";
    const string TargetUser = "username";
    const string TargetPass = "password";
    const int MaxReplicationAttempts = 50;

    static async Task<int> Main(string[] args)
    {
        // Set up a service collection and build the service provider
        var serviceCollection = new ServiceCollection();
        ConfigureServices(serviceCollection);
        var serviceProvider = serviceCollection.BuildServiceProvider();

        // Get the logger
        var logger = serviceProvider.GetRequiredService<ILogger<Program>>();

        // Fetch the list of databases from the source cluster
        var ListOfDatabases = new List<string>();
        using (var httpClient = new HttpClient())
        {
            try
            {
                var response = await httpClient.GetAsync($"http://{SourceHost}:5984/_all_dbs");
                if (!response.IsSuccessStatusCode)
                {
                    return 1;
                }
                var jsonString = await response.Content.ReadAsStringAsync();
                ListOfDatabases = JsonConvert.DeserializeObject<List<string>>(jsonString) ?? [];
                logger.LogInformation("Received {dbCount} database strings", ListOfDatabases.Count);
            }
            catch (Exception ex)
            {
                logger.LogError(ex, "Error getting databases from source host: {sourceHost}", SourceHost);
                return 1;
            }
        }

        // Don't replicate the system databases
        ListOfDatabases.Remove("_users");
        ListOfDatabases.Remove("_replicator");
        ListOfDatabases.Remove("_global_changes");

        List<string> FailedReplications = [];
        int SuccessCount = 0;
        int FailureCount = 0;

        foreach (var Database in ListOfDatabases)
        {
            logger.LogInformation("----------------------------------------\n");

            var handler = new HttpClientHandler
            {
                ServerCertificateCustomValidationCallback = (sender, cert, chain, sslPolicyErrors) => true
            };
            using var httpClient = new HttpClient(handler);
            httpClient.DefaultRequestHeaders.UserAgent.ParseAdd("CouchDB-Replication-Tool/1.0");

            // Add Basic Authentication header
            var authString = $"{TargetUser}:{TargetPass}";
            var base64Auth = Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(authString));
            httpClient.DefaultRequestHeaders.Authorization =
                new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", base64Auth);

            // Set the Accept header to accept all types
            httpClient.DefaultRequestHeaders.Accept.Add(
                new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("*/*"));

            // Credentials go in the Basic auth header, not the _replicate URL
            var replicateUrl = $"http://{TargetHost}:5984/_replicate";
            var DatabaseEncoded = HttpUtility.UrlEncode(Database);
            var replicationRequest = new
            {
                source = $"http://{SourceHost}:5984/{DatabaseEncoded}",
                target = $"http://{TargetUser}:{TargetPass}@{TargetHost}:5984/{DatabaseEncoded}",
                create_target = true,
            };

            bool ReplicationCompleted = false;
            logger.LogInformation("Starting Replication of {Database} database from {SourceHost} to {TargetHost}",
                Database, SourceHost, TargetHost);

            var jsonContent = JsonConvert.SerializeObject(replicationRequest);
            int Attempts = 0;
            while (!ReplicationCompleted)
            {
                Attempts++;
                try
                {
                    // Build a fresh request body for each attempt so retries work on every runtime
                    var content = new StringContent(jsonContent, System.Text.Encoding.UTF8, "application/json");
                    var response = await httpClient.PostAsync(replicateUrl, content);
                    logger.LogInformation("Replicate request returned: {status}", response.ReasonPhrase);
                    if (!response.IsSuccessStatusCode)
                    {
                        logger.LogWarning("Replicate request failed for {Database}", Database);
                        await Task.Delay(TimeSpan.FromSeconds(15)); // 15 second delay between replication attempts
                    }
                    else
                    {
                        SuccessCount++;
                        ReplicationCompleted = true;
                    }
                }
                catch (Exception ex)
                {
                    FailureCount++;
                    FailedReplications.Add(Database);
                    logger.LogError(ex, "Error replicating {Database}", Database);
                    break;
                }
                if (Attempts >= MaxReplicationAttempts)
                {
                    FailureCount++;
                    FailedReplications.Add(Database);
                    logger.LogError("Max replication attempts reached for {Database}", Database);
                    break;
                }
            }
            logger.LogInformation("Done replicating {Database}", Database);
        }

        logger.LogInformation("Replication completed with {successCount} successes and {failureCount} failures",
            SuccessCount, FailureCount);
        if (FailedReplications.Count > 0)
        {
            logger.LogWarning("Failed to replicate the following databases: {failedReplications}",
                string.Join("\n", FailedReplications));
        }
        return 0;
    }

    static void ConfigureServices(IServiceCollection services)
    {
        var config = new LoggingConfiguration();

        // Console target, with the level token colored per severity
        var consoleTarget = new ColoredConsoleTarget("console")
        {
            Layout = "${level:uppercase=true}|${message} ${exception:format=tostring}",
            EnableAnsiOutput = true,
        };
        consoleTarget.WordHighlightingRules.Add(new ConsoleWordHighlightingRule
        {
            Text = "INFO",
            Condition = ConditionParser.ParseExpression("level == LogLevel.Info"),
            ForegroundColor = ConsoleOutputColor.Green,
            IgnoreCase = false,
            WholeWords = true
        });
        consoleTarget.WordHighlightingRules.Add(new ConsoleWordHighlightingRule
        {
            Text = "ERROR",
            Condition = ConditionParser.ParseExpression("level == LogLevel.Error"),
            ForegroundColor = ConsoleOutputColor.Red,
            IgnoreCase = false,
            WholeWords = true
        });
        consoleTarget.WordHighlightingRules.Add(new ConsoleWordHighlightingRule
        {
            Text = "WARN",
            Condition = ConditionParser.ParseExpression("level == LogLevel.Warn"),
            ForegroundColor = ConsoleOutputColor.Yellow,
            IgnoreCase = false,
            WholeWords = true
        });

        /*
        // Alternative: color the whole row per log level
        consoleTarget.RowHighlightingRules.Add(new ConsoleRowHighlightingRule(
            condition: ConditionParser.ParseExpression("level == LogLevel.Info"),
            foregroundColor: ConsoleOutputColor.Green,
            backgroundColor: ConsoleOutputColor.White));
        consoleTarget.RowHighlightingRules.Add(new ConsoleRowHighlightingRule(
            condition: ConditionParser.ParseExpression("level == LogLevel.Error"),
            foregroundColor: ConsoleOutputColor.Red,
            backgroundColor: ConsoleOutputColor.White));
        consoleTarget.RowHighlightingRules.Add(new ConsoleRowHighlightingRule(
            condition: ConditionParser.ParseExpression("level == LogLevel.Warn"),
            foregroundColor: ConsoleOutputColor.Yellow,
            backgroundColor: ConsoleOutputColor.White));
        */

        config.AddTarget(consoleTarget);
        config.AddRule(NLog.LogLevel.Trace, NLog.LogLevel.Fatal, consoleTarget);
        LogManager.Configuration = config;

        services.AddLogging(loggingBuilder =>
        {
            loggingBuilder.ClearProviders();
            loggingBuilder.SetMinimumLevel(Microsoft.Extensions.Logging.LogLevel.Trace);
            loggingBuilder.AddNLog(config);
        });
    }
}

In addition, CouchDB no longer listens on 5986, so an HAProxy redirect is required to keep that functioning. RuhNet helped with that and it's below:

frontend couch-5986-admin-port
    bind 127.0.0.1:15986
    default_backend couch-redir-node-admin-port

backend couch-redir-node-admin-port
    balance roundrobin
    # HAProxy < 2.0: uncomment the following
    #reqrep ^([^\ :]*)\ /(.*) \1\ /_node/_local/\2
    # HAProxy 2.0 and above: uncomment the following
    #http-request replace-uri ^/(.*) /_node/_local/\1
    server couch1 172.31.12.34:5984 check
    server couch2 172.31.23.45:5984 check
    server couch3 172.31.34.56:5984 check

Lastly, CouchDB no longer supports the Iterator() function, which needs to be replaced with .forEach. Per RuhNet, the following needs to be done, but we have yet to test this. We're writing a script that will look through all documents and check for occurrences of Iterator so they can be replaced (a rough sketch of that scan follows the design document below).
{ "_id": "_design/trunkstore", "language": "javascript", "views": { "crossbar_listing": { "map": "function(doc) { if (doc.pvt_type != 'sys_info' || doc.pvt_deleted) return; emit(doc._id, {'realm': doc.account.auth_realm}); }", "reduce": "_count" }, "lookup_did": { "map": "function(doc) { if(doc.pvt_type != 'sys_info' || doc.pvt_deleted ) return; var realm = doc.account.auth_realm; if(doc.servers) { doc.servers.forEach(function(srv) { var auth_clone = JSON.parse(JSON.stringify(srv.auth)); auth_clone.auth_realm = realm; if (srv.options.enabled != false && srv.DIDs) { for (var did in srv.DIDs) { emit(did, { 'callerid_server': srv.callerid || '', 'callerid_account': doc.callerid || '', 'e911_callerid_server': srv.e911_callerid || '', 'e911_callerid_account': doc.e911_callerid || '', 'auth': auth_clone, 'DID_Opts': srv.DIDs[did], 'inbound_format': srv.inbound_format || 'npan', 'server': srv.options, 'account': doc.account}); } } }) } }" }, "lookup_user_flags": { "map": "function(doc) { if(doc.pvt_type != 'sys_info') return; var realm = doc.account.auth_realm; if(doc.call_restriction) { var call_restriction = JSON.parse(JSON.stringify(doc.call_restriction)) }; if(doc.servers) { var acct_clone = JSON.parse(JSON.stringify(doc.account)); doc.servers.forEach(function(srv) { if (srv.auth) { var srv_clone = JSON.parse(JSON.stringify(srv)); srv_clone.auth.auth_realm = realm; emit([realm.toLowerCase(), srv_clone.auth.auth_user.toLowerCase()], {\"server\": srv_clone, \"account\": acct_clone, \"call_restriction\": call_restriction}); } }) }}" }, "lookup_did.old": { "map": "function(doc) { if(doc.pvt_type != 'sys_info' || doc.pvt_deleted ) return; var realm = doc.account.auth_realm; if(doc.servers) { var srvs = Iterator(doc.servers); for (var srv in srvs) { var auth_clone = JSON.parse(JSON.stringify(srv[1].auth)); auth_clone.auth_realm = realm; if (srv[1].enabled != false && srv[1].DIDs) { var DIDs = Iterator(srv[1].DIDs); for (var DID in DIDs) { emit(DID[0], { 'callerid_server': srv[1].callerid || '', 'callerid_account': doc.callerid || '', 'e911_callerid_server': srv[1].e911_callerid || '', 'e911_callerid_account': doc.e911_callerid || '', 'auth': auth_clone, 'DID_Opts': DID[1], 'inbound_format': srv[1].inbound_format || 'npan', 'server': srv[1].options, 'account': doc.account}); } } } } }" }, "lookup_user_flags.old": { "map": "function(doc) { if(doc.pvt_type != 'sys_info') return; var realm = doc.account.auth_realm; if(doc.call_restriction) { var call_restriction = JSON.parse(JSON.stringify(doc.call_restriction)) }; if(doc.servers) { var acct_clone = JSON.parse(JSON.stringify(doc.account)); var srvs = Iterator(doc.servers); for (var srv in srvs) { if (srv[1].auth) { var srv_clone = JSON.parse(JSON.stringify(srv[1])); srv_clone.auth.auth_realm = realm; emit([realm.toLowerCase(), srv_clone.auth.auth_user.toLowerCase()], {\"server\": srv_clone, \"account\": acct_clone, \"call_restriction\": call_restriction}); } } }}" } } }
  4. Awesome, thank you! We've been migrating databases for the past 24 hours. For whatever reason the connection from BigCouch times out and we have to re-run the _replicate on it numerous times to finally get all the documents over. Have you seen anything like this? After running the same command a few (or a few dozen) times, everything eventually makes it over. We tried everything from connection timeouts to waiting between replicates, with the same result.

% curl -X POST http://user:pass@xx.xx.xx.xx:5984/_replicate \
  -H "Content-Type: application/json" \
  -d '{
    "source": "http://xx.xx.xx.xx:5984/anonymous_cdrs",
    "target": "http://user:pass@xx.xx.xx.xx:5984/anonymous_cdrs",
    "create_target": true,
    "connection_timeout": 1000000
  }'
{"error":"error","reason":"{http_request_failed,\"POST\",\n    \"http://xx.xx.xx.xx:5984/anonymous_cdrs/_bulk_get?latest=true&revs=true&attachments=false\",\n    {error,sel_conn_closed}}"}
  5. Well, that didn't last long
  6. No more quarterly news?
  7. So, about that change? It's been over a couple of years, and if one is not logged in, one still cannot see the forum at all.
  8. Sure, and you are correct: it needs a redirect since the path has changed. All you need is something like this:

frontend couch-5986-admin-port
    bind 127.0.0.1:15986
    default_backend couch-redir-node-admin-port

backend couch-redir-node-admin-port
    balance roundrobin
    reqrep ^([^\ :]*)\ /(.*) \1\ /_node/_local/\2
    #http-request replace-uri ^/(.*) /_node/_local/\1
    server couch1 172.31.12.34:5984 check
    server couch2 172.31.23.45:5984 check
    server couch3 172.31.34.56:5984 check

If you are using HAProxy v2.0 or later, comment out the `reqrep` line and uncomment the `http-request replace-uri` line.
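One hedged way to sanity-check the rewrite after reloading HAProxy (admin credentials are placeholders): request /_system on the old-style admin port; with the rule in place it should come back as /_node/_local/_system on the CouchDB 3 backend and return 200.

// Minimal C# snippet; a curl to the same URL works just as well
using var http = new HttpClient();
var auth = Convert.ToBase64String(
    System.Text.Encoding.ASCII.GetBytes("admin:password")); // placeholder
http.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", auth);

// Rewritten by HAProxy to GET /_node/_local/_system on the backend
var resp = await http.GetAsync("http://127.0.0.1:15986/_system");
Console.WriteLine($"{(int)resp.StatusCode} {resp.ReasonPhrase}");
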
  9. Hi RuhNet/Mooseable, our BigCouch cluster is old and dying, and we're looking to move to CouchDB (preferably v3). Is it possible to provide any information on what needs to be done with HAProxy to redirect the admin port (5986) to 5984? It's not just simple forwarding but rewriting the URLs, correct?
  10. We're doing a soft transition of customers from cloud-based office systems (with similar From IPs) to Kazoo on-prem. Some have more advanced SDK-driven call control applications, and we need 4-digit dialing across systems during the migrations. CustA is on the existing PBX, but we are slowly adding customers to Kazoo, so in the interim we do 4-digit dialing. Other PBX cluster systems without DIDs could just dial xxxx over a trunk, like in the Mitel/FreePBX days, but in this case it's definitely different, as anything and everything is DID-driven. We also did one customer where we created fake DIDs (1xxx-xxx-xxxx) to insert/remove and auto-assign, but there is a headache in managing it this way for each customer on an xxxx DID. It did work for that one customer, though it's not dynamic to any callflow (unless maybe we modify the insert). Or what if we modified the FreeSWITCH directory and did domain-based matching, taking out the dynamic nature of Kazoo? I did do the route you suggested a few years back, adding a device with IP auth, and it works that way for cust1.kaz.com, but when I add another customer from the same cloud carrier and try to call from the same cloud system (custb.domain.com) to the Kazoo cluster's cust2.kaz.com via domain routing, it chooses the account with the device IP auth. We thought of doing fake IPs per customer for auth, but it's hard to find a viable solution. I thought Trunkstore would do the trick, or potentially modifying the map functions in lookup_did/lookup_user_flags, but that is far beyond my scope. Or maybe we look at a Kamailio front proxy to route different things, but if there is something like this, what would it even be called, so we could consider a feature request for the PBX Trunkstore config? Like me, you'd think SIP domain-based auth routing would just work for ext-to-ext dialing.
  11. I guess I would ask what purpose the other tenant-based cloud system is serving, and why you are using KAZOO (also a tenant-based cloud system) as a class 5 switch for them? Connectivity/trunkstore is more class 4 switching, hence extension dialing isn't supported.
  12. Oh, so if we have multiple customers on the same FROM IP (tenant-based cloud system), this won't work. Will this be something in 5.x if it comes out to the community? As a workaround I swapped the From header to match the To domain and put an X-Header in to process via a middle proxy agent box. That seems hacky, but it works; I'm hoping it could treat it like a PBX trunk with 4-digit dialing across.
  13. No, extension dialing is not supported by connectivity/trunkstore devices. KAZOO will receive the dialed number from your PBX and try to route it via the carriers (the stepswitch app in KAZOO). So 5200 won't be routed if you use it. IP auth for cust1 will require all calls from that IP to be associated with cust1 on KAZOO.
  14. You can use CouchDB v3 fine on 4.3; it's more about CouchDB zoning, which is well documented. This is the Discord invite again, for another 7 days. It's not a support channel, though; it's for those wanting to contribute to efforts in maintaining 4.3. https://discord.gg/r77hSf89
  15. @mc_ Question on this: I have a multi-tenant PBX that wants to route via the cust2.kazoo.sip.com DNS name to multiple tenants on a Kazoo cluster. When I added IP auth on one account it worked, but now I have another tenant from the same PBX and I need to send the same 5200@cust2.kazoo.sip.com (the From domain is different). I think that because of IP auth it all goes to 5200@cust1.kazoo.sip.com even though I called 5200@cust2.kazoo.sip.com. Is there a way to make it work like a pure PBX trunk, sending to 5200@cust2.kazoo.sip.com, and can it use DNS to route that (4-digit dialing between the PBX and Kazoo)? Back in the past I did add a device with IP auth (but this trunk has SIP-based auth too, so should it work that way?). Would I be able to PATCH the connectivity API to inbound-route like this?

{
  "servers": [
    {
      "server_name": "pbxa.sip.telexxxx.com",
      "auth": {
        "auth_method": "username",
        "auth_user": "user93k92",
        "auth_realm": "pbxa.sip.telexxxx.com"
      },
      "options": {
        "inbound_format": "domain",
        "enabled": true,
        "media_handling": "bypass",
        "force_outbound": false
      }
    }
  ]
}
  16. Don't want to get too off topic with this thread, but: Couch v2 is the supported/approved version, but I use v3. You need to modify a Trunkstore view (if you use Trunkstore) and do a redirection of the admin port using HAProxy or another server like Nginx/Caddy/whatever. There is a thread about the Trunkstore view mod here in the forums. Placement and zoning depend on your cluster size. If you are in a single DC, then there isn't a reason to do multi-zone unless you have a large number of servers. The most common scenario I encounter when setting up clients is that they have 2 zones, and I recommend a minimum of 3 DB servers in each if they want full redundancy with good performance. Then for placement I usually recommend a 2+2 split; in other words, put 2 copies of each document in the local zone and 2 copies in the remote zone. NOTE that CouchDB zones are disconnected from and have no relationship to Kazoo zones; it's a totally different system. You could have a single-zone Kazoo with 5 Couch zones if you wanted, or vice versa. I generally recommend having as few zones as you can get by with, both for Couch and for Kazoo. (I sent you an invite as well.)
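If it helps, here's a hedged sketch of what that 2+2 placement looks like as API calls, assuming four DB nodes, zone labels zone1/zone2, and the 15986 admin rewrite from earlier in this thread (node names and credentials are placeholders; double-check against the CouchDB cluster docs for your version):

using Newtonsoft.Json.Linq;

class ZonePlacement
{
    // The 5986-style admin frontend from the HAProxy config above;
    // requests get rewritten to /_node/_local/... on the backend
    const string Admin = "http://127.0.0.1:15986";

    static async Task Main()
    {
        using var http = new HttpClient();
        var auth = Convert.ToBase64String(
            System.Text.Encoding.ASCII.GetBytes("admin:password")); // placeholder
        http.DefaultRequestHeaders.Authorization =
            new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", auth);

        // 1. Tag each node's doc in the _nodes database with a zone attribute
        var zones = new Dictionary<string, string>
        {
            ["couchdb@db1.local"] = "zone1", // placeholders
            ["couchdb@db2.local"] = "zone1",
            ["couchdb@db3.local"] = "zone2",
            ["couchdb@db4.local"] = "zone2",
        };
        foreach (var (node, zone) in zones)
        {
            var doc = JObject.Parse(await http.GetStringAsync($"{Admin}/_nodes/{node}"));
            doc["zone"] = zone;
            await http.PutAsync($"{Admin}/_nodes/{node}",
                new StringContent(doc.ToString(),
                    System.Text.Encoding.UTF8, "application/json"));
        }

        // 2. Default placement: 2 shard copies per zone (repeat per node, or
        //    set [cluster] placement = zone1:2,zone2:2 in local.ini instead)
        await http.PutAsync($"{Admin}/_config/cluster/placement",
            new StringContent("\"zone1:2,zone2:2\"",
                System.Text.Encoding.UTF8, "application/json"));
    }
}

Note that the placement rule only affects databases created after it's set, so it belongs in place before you start replicating.
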
  17. Does anyone have an updated install plan for creating a multizone cluster on CouchDB 2/3? I'm having trouble understanding the placement and zoning options for Couch, but would like to be running on a newer version when building a 4.3 cluster. Also, can you post the Discord invite again? It expired. I'd love to be able to help in any way that I can.
  18. There's also the 4.3 build that's been done via the kazoo-classic repo. https://github.com/kazoo-classic/kazoo/releases/tag/v4.3-1-alpha I'd like to thank the others in the Discord for their monumental effort in getting 4.3 running on newer OSes and updated dependencies.
  19. From Discord: In an effort to try and make things easier for people to test, I've uploaded pre-built releases: kazoo-4.4: https://github.com/kageds/kazoo_applications/releases/download/0.1/kazoo-4.4.tar.gz and FreeSWITCH: https://github.com/kageds/freeswitch/releases/download/0.1/freeswitch.TGZ
  20. It's like, 1am my time, but are these still going on 2 years later? :P
  21. That is disappointing to hear, but not unexpected. As always, James, thank you for advocating for all of us!!!
  22. Sadly, the update is that the 5.x release is now targeting early 2025. Your guess is as good as mine whether that target will be hit. There are continual assurances that open sourcing is the goal, but the delays keep mounting, so...
  23. @mc_ Have we heard any internal updates on the 5.x open source release yet?
  24. Hello, I have a customer with 3 users overseas (I believe using Yealink). I need to provision their phones to Kazoo, but before I factory reset their phones I'd like to make sure everything is working fine. So instead of resetting their phones, is there a way I could set up the Provisioner to use "Account 2" only on these 3 phones, to test that the extensions and outbound calling work before wiping the phones?
  25. Worst-case scenario, we can cherry-pick improvement commits and “front-port” them to v5 if/when it gets released publicly.
  26. For anyone in the same situation, wanting to continue running 4.3 but on a supported OS: it's been hard-forked to https://github.com/kazoo-classic as I'm going to assume 2600Hz/Ooma won't want to maintain or accept PRs for 4.3. Repos will be updated as we test fresh deployments of 4.3, with installation instructions. That said, I'd rather put my efforts into contributing to v5, but it's closed source, so I'm doing what I can :)