@lazedo @mc_ Thanks for confirming what would be a good way to enhance the AWS S3 authentication functionality. Are objects stored using the new Kazoo attachment storage abstraction (AWS S3, Google Drive, or other methods) intended to be accessible and retrievable by Kazoo after initial storage? For example, once a call recording MP3 or a voicemail message MP3 is stored using the new storage abstraction, does Kazoo retrieve that file for later access or playback? If so, would Crossbar be used to access these files through the same Kazoo attachment storage abstraction?
-
@lazedo I had seen that v4 AWS authentication (AWS4-HMAC-SHA256) was included in the libraries used by Kazoo. Perhaps the best compromise would be to add the ability to toggle between v2 and v4 using a new "auth_type": "v4" property within the storage plan? thanks, Graham
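P.S. To make the proposal concrete, here is a sketch of what an attachment handler entry might look like with the hypothetical "auth_type" property added. Everything except "auth_type" follows the /storage document format shown elsewhere in this thread; "auth_type" itself does not exist yet and is only my suggestion:

    "{user-generated-uuid}": {
        "handler": "s3",
        "name": "Amazon S3",
        "settings": {
            "bucket": "{AWS-S3-bucket-name}",
            "key": "{access-key-id}",
            "secret": "{secret-access-key}",
            "scheme": "https",
            "auth_type": "v4"
        }
    }

Accounts pointing at v2-only regions could omit the property, or set "auth_type": "v2", and keep today's behaviour.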
-
@lazedo Thank you for identifying the issue as being the AWS region where the S3 bucket is hosted. I confirm that we are now able to upload call recording MP3 attachments to AWS S3, as long as we avoid using any of the AWS regions that support only v4 AWS authentication. I wonder what Amazon's reasons are for requiring the stronger v4 AWS authentication (AWS4-HMAC-SHA256) in some regions but not others? At this time, the following regions support only v4 AWS authentication (AWS4-HMAC-SHA256):

Canada (Central)
US East (Ohio)
Asia Pacific (Mumbai)
Asia Pacific (Seoul)
EU (Frankfurt)
EU (London)

All of the remaining AWS regions support both v2 and v4 AWS authentication. See the list here: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

Not being able to use the Canada (Central) region for AWS S3 presents some significant challenges for us regarding data sovereignty and privacy, and our compliance with laws like PIPEDA (Canada-wide) and FOIPPA (in British Columbia). If we were able to upload to the Canada (Central) region for AWS S3, we would be compliant. My understanding is that just about every other regional government has similar laws and regulations regarding data sovereignty and privacy.

Politics and compliance aside, it looks like other S3-based projects are following AWS's lead and implementing v4 AWS authentication (AWS4-HMAC-SHA256) in their tools. Ceph is often used as a network-based block storage server, and it also offers an S3-compatible object storage server on the same back end. Read on: https://javiermunhoz.com/blog/2016/03/01/aws-signature-version-4-goes-upstream-in-ceph.html

Thanks for your help! Graham
-
@lazedo I've set up an entirely new AWS account and created a new S3 bucket with all of the defaults. In S3, when prompted to select a region, I selected Canada (Central). Testing recording again, I see the new AWS user ID {master-access-key-id} being used. However, I continue to get error 400 "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256." Which AWS S3 region are you testing with? What release of Kazoo are you using? (I'm on v4.1.26.) thanks, Graham
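P.S. For anyone else following along, a quick way to confirm which region a bucket actually lives in (and therefore whether it is v4-only) is the aws-cli, with the bucket name as a placeholder here:

    aws s3api get-bucket-location --bucket {AWS-S3-bucket-name}

For a bucket in Canada (Central) this should return "ca-central-1" as the LocationConstraint.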
-
@lazedo Thank you for the successful example. The header format that worked for you still does not appear to follow the AWS4-HMAC-SHA256 format. I've taken your suggestion and removed the host variable. In the Kazoo log I can see the default s3.amazonaws.com is now used instead of the regional host name I was testing with before. Even with no host set and the default s3.amazonaws.com in use, I continue to see the error 400 back from AWS S3: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256." Can you describe the AWS S3 account configuration you're testing with? Is this a personal testing account? What settings are enabled on the AWS S3 bucket?
-
@lazedo Thanks for the debugging info. Getting the below error.

    # kazoo-applications connect remote_console
    Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false]
    Eshell V7.3  (abort with ^G)
    (kazoo_apps@{hostname})1> hackney_trace:enable(80, "/tmp/hackney.log").
    ** exception error: undefined function dbg:tracer/2
         in function  hackney_trace:do_enable/3 (src/hackney_trace.erl, line 47)

Also, can you confirm that you're successfully uploading to AWS S3? If you are, can you show the HTTP header that's working?
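For what it's worth, dbg is part of OTP's runtime_tools application, so the undef error above suggests runtime_tools isn't included in this Kazoo release build (an assumption on my part, not something I've verified against the release config). A quick check from the same remote console:

    (kazoo_apps@{hostname})1> code:which(dbg).

If that returns non_existing, the module simply isn't available to hackney_trace.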
-
@lazedo We've created PCAPs both before and after changing from our older AWS IAM user to our new AWS master user. We can see the new credentials are being used since we updated the /storage document. However, we see the same HTTP header format being sent to AWS S3 (see below).

    PUT /{account_db}-201708/201708-{media_id}.mp3 HTTP/1.1
    Content-Type:
    authorization: AWS {IAM-ID}:{IAM-Secret-Hash}
    User-Agent: hackney/1.6.2
    content-md5: {md5}
    date: Thu, 24 Aug 2017 14:17:48 GMT
    Host: {s3_bucket}.{s3_host}

Are you seeing a different HTTP header format being sent to AWS S3 when you test? Can you provide an example here? thanks, Graham
-
@Anthony Manzella Yes, both AWS S3 and Google Drive would be great! Do you happen to have an AWS S3 account you could validate these HTTP header issues with? It would be great to verify that our config or environment isn't the cause. cheers, Graham
-
@mc_ Ok great! Please let us know if you'd like any help with AWS IAM permissions. In addition to cURL testing, having the aws-cli installed on your workstation can be a big help.
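For example, something like this from a workstation confirms the credentials and bucket work independently of Kazoo (the bucket name is a placeholder):

    pip install awscli
    aws configure        # prompts for the Access Key ID and Secret Access Key
    aws s3 ls s3://{AWS-S3-bucket-name}/

If the ls succeeds, the keys and bucket permissions are good before Kazoo ever enters the picture.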
-
@lazedo @mc_ Thank you both for your fast follow-up on my posts. AWS permissions are indeed complicated. I think what I'm understanding is that the credentials you used with AWS S3 were those of the master AWS account user, which controls the entire AWS account, not an individual AWS IAM user, which allows finer-grained control over and restriction of access to AWS services and resources. Please confirm if this is the case.

In our case (and in other AWS S3 deployments we've come across), AWS IAM users are created in order to provide more fine-tuned access to individual AWS resources (e.g. limiting access to only the S3 service instead of granting access to all AWS services). Additionally, specific to the AWS S3 service, one AWS IAM user can be restricted to only listing and reading the contents of an AWS S3 bucket, while another AWS IAM user is allowed list and full read/write access to the same bucket.

Using my AWS web console as the master account user, I was able to issue a new master Access Key ID and Secret Access Key pair.

    Access Key ID: {master-access-key-id} (20 characters long)
    Secret Access Key: {master-secret-access-key} (40 characters long)

We've gone ahead and updated our account storage configuration using the new master AWS account credentials instead of the former AWS IAM credentials.

    GET https://{kazoo-apps-host}:8443/v2/accounts/{account_id}/storage

    {
        "data": {
            "attachments": {
                "{user-generated-uuid}": {
                    "handler": "s3",
                    "name": "Amazon S3",
                    "settings": {
                        "bucket": "{AWS-S3-bucket-name}",
                        "host": "{regional-AWS-S3-hostname}",
                        "key": "{master-access-key-id}",
                        "scheme": "https",
                        "secret": "{master-secret-access-key}"
                    }
                }
            },
            "id": "{couched_doc_id}",
            "plan": {
                "modb": {
                    "types": {
                        "call_recording": {
                            "attachments": {
                                "handler": "{matching-user-generated-uuid-above}"
                            }
                        }
                    }
                }
            }
        }
    }

Unfortunately, we're still seeing the error 400 "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256." from AWS S3 for our test call using Kazoo v4.1.26. Are you able to reproduce the same errors? Are you using the same settings, like "host" and "scheme"? Any ideas would be great! We're happy to keep testing. thanks, Graham
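P.S. For reference, updating the /storage document via Crossbar looks roughly like this, where storage.json holds the document above (host and token are placeholders, and I'm assuming POST as the update verb):

    curl -X POST "https://{kazoo-apps-host}:8443/v2/accounts/{account_id}/storage" \
         -H "X-Auth-Token: {auth_token}" \
         -H "Content-Type: application/json" \
         -d @storage.json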
-
@Lance Were you able to get the solution you needed?
-
@lazedo Thanks for the follow-up on this issue! We have an IAM user with an IAM-defined permissions policy (see below) granting access to this specific AWS S3 bucket. This same IAM user is able to list, put, get and delete objects from this bucket using both the aws-cli tool and our own python-based tool.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListAllMyBuckets"
                ],
                "Resource": "arn:aws:s3:::*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:ListObjects"
                ],
                "Resource": [
                    "arn:aws:s3:::{bucket_name}",
                    "arn:aws:s3:::{bucket_name}/*"
                ]
            }
        ]
    }

How does 2600Hz set their access policies for AWS S3 buckets? Does 2600Hz use the AWS S3 service itself, or only 3rd-party S3-like services? thanks, Graham
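P.S. These are the kinds of aws-cli checks we ran under that IAM user's keys to confirm list, put, get and delete all work (profile and file names are placeholders):

    aws s3 ls s3://{bucket_name}/ --profile {iam-user}
    aws s3 cp test.mp3 s3://{bucket_name}/test.mp3 --profile {iam-user}
    aws s3 cp s3://{bucket_name}/test.mp3 ./test-copy.mp3 --profile {iam-user}
    aws s3 rm s3://{bucket_name}/test.mp3 --profile {iam-user}

All four succeed, so the policy itself shouldn't be what's blocking Kazoo's PUT.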
-
@FASTDEVICE @Karl Anderson This is indeed a next-level awesome feature. We're excited to use it. We've started testing call recording MP3 storage with AWS S3 in v4.1.26 (we've tested in 4.0.58 too). We have the prerequisite /storage and /storage/plans documents created. However, when we place a test call with recording enabled, we see Kazoo trying to PUT to AWS S3 with the following error in /var/log/kazoo/kazoo.log:

    The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

When we look at the header that Kazoo is sending to AWS S3, it does indeed look like an older header format is being used.

    PUT /{account_db}-201708/201708-{media_id}.mp3 HTTP/1.1
    Content-Type:
    authorization: AWS {IAM-ID}:{IAM-Secret-Hash}
    User-Agent: hackney/1.6.2
    content-md5: {md5}
    date: Thu, 24 Aug 2017 14:17:48 GMT
    Host: {s3_bucket}.{s3_host}

When we look at the AWS4-HMAC-SHA256 header format as defined by AWS, we see the following example:

    Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, SignedHeaders=host;range;x-amz-date, Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024

Please refer to http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html

From what we can see, these headers are quite different in format. We do see provisions for the AWS4-HMAC-SHA256 format in the Kazoo source in /core/kazoo_attachments/src/aws/kz_aws.erl in 4.0: https://github.com/2600hz/kazoo/blob/4.0/core/kazoo_attachments/src/aws/kz_aws.erl

Perhaps AWS4-HMAC-SHA256 has just not been implemented yet? Is this AWS S3 header format issue isolated to our usage? Is anyone else having the same issue? thanks, Graham
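P.S. To make the difference between the two formats concrete, here is a minimal sketch of the v4 signing steps from the AWS document linked above, written in Python (the language of our own upload tool). Keys, host and object path are placeholders, and this only builds the Authorization header; real code would also send the request and handle errors:

    import datetime
    import hashlib
    import hmac

    access_key = "{access-key-id}"          # placeholder
    secret_key = "{secret-access-key}"      # placeholder
    region = "ca-central-1"                 # a v4-only region, e.g. Canada (Central)
    service = "s3"
    host = "{s3_bucket}.s3.amazonaws.com"   # placeholder virtual-hosted-style host
    method = "PUT"
    uri = "/{account_db}-201708/201708-{media_id}.mp3"   # placeholder object key
    payload = b"...mp3 bytes..."

    def hmac_sha256(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")   # e.g. 20170824T141748Z
    date_stamp = now.strftime("%Y%m%d")         # credential-scope date
    payload_hash = hashlib.sha256(payload).hexdigest()

    # 1. Canonical request: method, URI, query string, headers (alphabetical),
    #    signed header names, and the payload hash.
    canonical_headers = ("host:" + host + "\n"
                         + "x-amz-content-sha256:" + payload_hash + "\n"
                         + "x-amz-date:" + amz_date + "\n")
    signed_headers = "host;x-amz-content-sha256;x-amz-date"
    canonical_request = "\n".join([method, uri, "", canonical_headers,
                                   signed_headers, payload_hash])

    # 2. String to sign: algorithm, timestamp, credential scope,
    #    and the hash of the canonical request.
    scope = "/".join([date_stamp, region, service, "aws4_request"])
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode("utf-8")).hexdigest()])

    # 3. Signing key: HMACs chained over date, region, service and "aws4_request".
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_signing = hmac_sha256(hmac_sha256(hmac_sha256(k_date, region), service),
                            "aws4_request")
    signature = hmac.new(k_signing, string_to_sign.encode("utf-8"),
                         hashlib.sha256).hexdigest()

    # 4. The Authorization header AWS is asking for, instead of the
    #    old "AWS {key}:{hash}" form shown above.
    authorization = ("AWS4-HMAC-SHA256 Credential=" + access_key + "/" + scope
                     + ", SignedHeaders=" + signed_headers
                     + ", Signature=" + signature)
    print(authorization)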
-
Hi Lance, The new Qubicle offering looks promising; it just wasn't available when we first needed a call centre solution. We've been using ACDc in production for quite some time. As James said, ACDc is community supported. There are about 6 sponsors and contributors who maintain and augment ACDc. I am aware of several large-scale call centres that count on ACDc each and every day. We personally support a call centre of 75 agents with 3K connected calls per day using ACDc.