
Call recording per channel


Te Matau

Recommended Posts

Somewhere I thought I read that it is now possible to record each leg of a call separately, but I can't remember where I read that or figure out how to do it. Any clues appreciated.

On a related question, is it possible to stream call audio in real time to another destination (i.e. in addition to the SIP end-points)? I'm thinking of live transcription as a possible use case.


  • 2600Hz Employees

Howdy,

We are still documenting this new functionality, and you should plan on deploying it in Kazoo 4.1, as that is the first version in which we consider it stable.

 

To get you started, there is a new object called 'call_recording' that you can place on the account, user, or device. Setting the appropriate values will trigger call recording and store the recordings in the database or at the provided URL. Using storage plans, you can have these recordings pushed to storage external to the main bigcouch cluster.

 

The parameter schemas are available here:

https://github.com/2600hz/kazoo/blob/master/applications/crossbar/priv/couchdb/schemas/call_recording.json

https://github.com/2600hz/kazoo/blob/master/applications/crossbar/priv/couchdb/schemas/call_recording.parameters.json

https://github.com/2600hz/kazoo/blob/master/applications/crossbar/priv/couchdb/schemas/call_recording.source.json

 

Here is an example payload for the account (the same object could be applied to a user or device):

{  
   "data":{  
      "name":"Example Account",
      "realm":"3d885c.sip.example.com",
      "call_recording":{  
         "account":{  
            "inbound":{  
               "onnet":{  
                  "enabled":false
               },
               "offnet":{  
                  "enabled":false
               }
            },
            "outbound":{  
               "onnet":{  
                  "enabled":false
               },
               "offnet":{  
                  "enabled":false
               }
            }
         },
         "endpoint":{  
            "inbound":{  
               "onnet":{  
                  "enabled":false
               },
               "offnet":{  
                  "enabled":true
               }
            },
            "outbound":{  
               "onnet":{  
                  "enabled":false
               },
               "offnet":{  
                  "enabled":true
               }
            }
         }
      }
   }
}
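
To apply that via the Crossbar API, you could PATCH just the call_recording object onto the account document, something along these lines (the hostname, account id and auth token below are placeholders, and you should double-check the verb and path against your Crossbar version):

curl -X PATCH \
     -H "X-Auth-Token: {AUTH_TOKEN}" \
     -H "Content-Type: application/json" \
     "https://{CROSSBAR_SERVER}:8443/v2/accounts/{ACCOUNT_ID}" \
     -d '{"data": {"call_recording": {"endpoint": {"inbound": {"offnet": {"enabled": true}}, "outbound": {"offnet": {"enabled": true}}}}}}'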

 

The current version of the recording API (for listing and fetching) is here:

https://docs.2600hz.com/dev/applications/crossbar/doc/recordings/
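
For example, a rough sketch based on that doc (all bracketed values are placeholders, and the exact paths/headers may differ by version):

# List recordings for an account:
curl -H "X-Auth-Token: {AUTH_TOKEN}" \
     "https://{CROSSBAR_SERVER}:8443/v2/accounts/{ACCOUNT_ID}/recordings"

# Fetch a single recording as audio by changing the Accept header:
curl -H "X-Auth-Token: {AUTH_TOKEN}" \
     -H "Accept: audio/mpeg" \
     "https://{CROSSBAR_SERVER}:8443/v2/accounts/{ACCOUNT_ID}/recordings/{RECORDING_ID}" \
     -o recording.mp3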

 

Of course, the original method of placing a recording start action in the callflow is still supported as well; a quick sketch of that is below.
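
Something like this (from memory, so treat the 'record_call' module name and its parameters as a sketch and check the callflow docs for the exact schema; all bracketed values are placeholders):

curl -X PUT \
     -H "X-Auth-Token: {AUTH_TOKEN}" \
     -H "Content-Type: application/json" \
     "https://{CROSSBAR_SERVER}:8443/v2/accounts/{ACCOUNT_ID}/callflows" \
     -d '{"data": {"numbers": ["1000"], "flow": {"module": "record_call", "data": {"action": "start", "format": "mp3"}, "children": {"_": {"module": "user", "data": {"id": "{USER_ID}"}, "children": {}}}}}}'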

 


Quote

...and you should plan on deploying it in Kazoo 4.1, as that is the first version in which we consider it stable.

I wanted to thank 2600hz for taking call recording to the next level, as the new implementation will greatly assist with our development work.

We prematurely started using and testing the v4.1 code with some hiccups (we didn't pay heed to the warnings), but the 2600hz support team has been fantastic in providing assistance. Way to go, 2600hz, and thanks for putting up with our overzealous nature.


  • 1 month later...

@FASTDEVICE @Karl Anderson This is indeed a next-level awesome feature. We're excited to use it. :D

We've started testing call recording MP3 storage with AWS S3 in v4.1.26 (we've tested in 4.0.58 too).

We have the prerequisite /storage and /storage/plans documents created. However, when we place a test call with recording enabled, we see Kazoo trying to PUT to AWS S3, with the following error in /var/log/kazoo/kazoo.log:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

When we look at the header that Kazoo is sending to AWS S3, it does indeed look like an older header format is being used:

PUT /{account_db}-201708/201708-{media_id}.mp3 HTTP/1.1
Content-Type: 
authorization: AWS {IAM-ID}:{IAM-Secret-Hash}
User-Agent: hackney/1.6.2
content-md5: {md5}
date: Thu, 24 Aug 2017 14:17:48 GMT
Host: {s3_bucket}.{s3_host}

When we look at the AWS4-HMAC-SHA256 header format as defined by AWS, we see the following example:

Authorization: AWS4-HMAC-SHA256 
Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, 
SignedHeaders=host;range;x-amz-date,
Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024

Please refer to http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html

From what we see, these headers are pretty different in format. 

We do see provisions for the AWS4-HMAC-SHA256 format in Kazoo source in /core/kazoo_attachments/src/aws/kz_aws.erl in 4.0.
https://github.com/2600hz/kazoo/blob/4.0/core/kazoo_attachments/src/aws/kz_aws.erl

Perhaps AWS4-HMAC-SHA256 has just not been implemented yet?

Is this AWS S3 header format issue isolated to our usage? Is anyone else having the same issue?

thanks,
Graham

 

 


@lazedo Thanks for the follow-up on this issue!

We have an IAM user with an IAM-defined permissions policy (see below) to access this specific AWS bucket.

This same IAM user is able to list, put, get and delete objects from this bucket using both the aws-cli tool and our own Python-based tool.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListObjects"
            ],
            "Resource": [
                "arn:aws:s3:::{bucket_name}",
                "arn:aws:s3:::{bucket_name}/*"
            ]
        }
    ]
}

How does 2600Hz set their access policies for AWS S3 buckets? Does 2600Hz use AWS S3 itself, or only 3rd-party S3-like services?

thanks,
Graham


  • 2600Hz Employees

@Graham Nelson-Zutter just to add, when I wrote the storage blog article, I just signed up for S3, got my access and secret tokens, plugged them in as in the article, and got uploads working. It's possible/probable that's not the typical way folks set up their S3 accounts (I have pretty much no knowledge of S3 or Amazon properties). So I can only say that the instructions presented in the blog worked for me for a minimalist setup on S3.

Obviously it would be great to have more detailed instructions for other S3 setups and adjust the code accordingly (if necessary).


 @lazedo @mc_ Thank you both for your fast follow-up on my posts. AWS permissions are indeed complicated. 

I think what I'm understanding is that the credentials you used with AWS S3 were those of the master AWS account user, which controls the entire AWS account, rather than an individual AWS IAM user, which allows finer-grained control and access restrictions for AWS services and resources. Please confirm whether this is the case.

In our case (and in other AWS S3 setups we've come across), AWS IAM users are created in order to provide more fine-tuned access to individual AWS resources (e.g. limiting access to only the S3 service instead of granting access to all services at AWS).

Additionally, specific to the AWS S3 service, one AWS IAM user can be restricted to only listing and reading the contents of an AWS S3 bucket, while another AWS IAM user is allowed list and full read/write access to the same AWS S3 bucket.

Using my AWS web console as the master account user, I was able to issue a new master Access Key ID and Secret Access Key pair.

Access Key ID:
{master-access-key-id}  20 characters long 
Secret Access Key:
{master-secret-access-key} 30 characters long

We've gone ahead and updated our account storage configuration using the new master AWS account credentials instead of the former AWS IAM credentials. 

GET https://{kazoo-apps-host}:8443/v2/accounts/{account_id}/storage

"data": {
        "attachments": {
            "{user-generated-uuid}": {
                "handler": "s3",
                "name": "Amazon S3",
                "settings": {
                    "bucket": "{AWS-S3-bucket-name}",
                    "host": "{regional-AWS-S3-hostname}",
                    "key": "{master-access-key-id}",
                    "scheme": "https",
                    "secret": "{master-secret-access-key}"
                }
            }
        },
        "id": "{couched_doc_id}",
        "plan": {
            "modb": {
                "types": {
                    "call_recording": {
                        "attachments": {
                            "handler": "{matching-user-generated-uuid-above}"
                        }
                    }
                }
            }
        }
    }
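
(For reference, we wrote that storage document with something along these lines; the attachment UUID is client-generated, all bracketed values are placeholders, and the verb/path should be checked against the Crossbar storage docs.)

curl -X PUT \
     -H "X-Auth-Token: {AUTH_TOKEN}" \
     -H "Content-Type: application/json" \
     "https://{kazoo-apps-host}:8443/v2/accounts/{account_id}/storage" \
     -d '{"data": {"attachments": {"{user-generated-uuid}": {"handler": "s3", "name": "Amazon S3", "settings": {"bucket": "{AWS-S3-bucket-name}", "host": "{regional-AWS-S3-hostname}", "scheme": "https", "key": "{master-access-key-id}", "secret": "{master-secret-access-key}"}}}, "plan": {"modb": {"types": {"call_recording": {"attachments": {"handler": "{user-generated-uuid}"}}}}}}}'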

Unfortunately, we're still seeing the error 400 "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256." from AWS S3 for our test call using Kazoo v4.1.26.

Are you able to reproduce the same errors? Are you using the same variables, like "host" and "scheme"?

Any ideas would be great! We're happy to keep testing ;)

thanks,
Graham


@lazedo We've created PCAPs both before and after changing from our older AWS IAM user to our new AWS master user.

We see the new credentials being used since we updated the /storage document. However, we see the same HTTP header format being sent to AWS S3 (see below).

PUT /{account_db}-201708/201708-{media_id}.mp3 HTTP/1.1
Content-Type: 
authorization: AWS {IAM-ID}:{IAM-Secret-Hash}
User-Agent: hackney/1.6.2
content-md5: {md5}
date: Thu, 24 Aug 2017 14:17:48 GMT
Host: {s3_bucket}.{s3_host}

Are you seeing a different HTTP header format being sent to AWS S3 when you test? Can you provide an example here?

thanks,
Graham


  • 2600Hz Employees

you can try this,

# kazoo-applications remote_console

> hackney_trace:enable(80, "/tmp/hackney.log").

(do the call with recording enabled)

> hackney_trace:disable().

^C

# cat /tmp/hackney.log

just tried :)

 


@lazedo Thanks for the debugging info. I'm getting the error below.

# kazoo-applications connect remote_console
Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false]

Eshell V7.3  (abort with ^G)
(kazoo_apps@{hostname})1> hackney_trace:enable(80, "/tmp/hackney.log").
** exception error: undefined function dbg:tracer/2
     in function  hackney_trace:do_enable/3 (src/hackney_trace.erl, line 47)
 

Also, can you confirm that you're successfully uploading to AWS S3? If you are, can you show the HTTP header that's working?


  • 2600Hz Employees

hmm, seems like a missing dependency in the build.

yes, i just did that before the previous post.

https://gist.github.com/lazedo/269ad7da4f6a585648afa79f2ed80f9d

and i switch between aws & minio and they both work.

the only difference that i see in your configuration is that you explicitly set

 "host": "{regional-AWS-S3-hostname}",

 

 


@lazedo Thank you for the successful example. The header format that worked for you still does not appear to follow the AWS4-HMAC-SHA256 format.

I've taken your suggestion and removed the host variable. In the kazoo log I can see the default s3.amazonaws.com is now used instead of the regional host name I was testing before. Even with no host set and the default s3.amazonaws.com used, I continue to see the error 400 back from AWS S3 "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."

Can you describe the AWS S3 account configuration you're testing with? Is this a personal testing account? What settings are enabled in the AWS S3 bucket?

 


@lazedo I've set up an entirely new AWS account and created a new S3 bucket with all of the defaults. In S3, when forced to select a region, I selected Canada (Central). Testing recording again, I see the new AWS user ID {master-access-key-id} being used. However, I continue to get error 400 "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."

Which AWS S3 region are you testing with? What release of Kazoo are you using (I'm on v4.1.26)?

thanks,
Graham

 


@lazedo Thank you for identifying the issue as the AWS region where the S3 bucket is hosted. I confirm that we are now able to upload call recording attachment MP3 files to AWS S3, as long as we avoid using any of the AWS regions that support only v4 AWS authentication.

I wonder what Amazon's reasons are for forcing the stronger v4 AWS authentication (AWS4-HMAC-SHA256) in some regions but not others.

At this time, the following regions support only v4 AWS authentication (AWS4-HMAC-SHA256):

  • Canada (Central)
  • US East (Ohio)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Seoul)
  • EU (Frankfurt)
  • EU (London)

All of the remaining AWS regions support both v2 and v4 AWS authentication. See list here: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

Not being able to use the Canada Central region for AWS S3 presents some significant challenges for us regarding data sovereignty and privacy and our compliance with laws like PIPEDA (Canada wide) and FOIPPA (in British Columbia). If we were able to upload to the Canada Central region for AWS S3, we would be compliant. My understanding is that just about every other regional government has similar laws and regulations regarding data sovereignty and privacy. 

Politics and compliance aside, it looks like other S3-based projects are taking AWS's lead and implementing v4 AWS authentication (AWS4-HMAC-SHA256) in their tools. Ceph is often used as a network-based block storage server. Ceph also has an S3 object storage server using the same back-end. Read on: https://javiermunhoz.com/blog/2016/03/01/aws-signature-version-4-goes-upstream-in-ceph.html

Thanks for your help!

Graham


  • 2600Hz Employees

@Graham Nelson-Zutter

we have v4 implemented in the underlying aws library, but that was not used by our s3 attachment handler at the time of implementation.

enabling v4 presents some challenges but i think it can be easily done, it's a matter of prioritisation (or maybe someone wants to sponsor that).

Best

 

