
Call recording per channel


Te Matau


@lazedo @mc_ Thanks for confirming what would be a good way to enhance the AWS S3 authentication functionality. 

Are objects stored using the new Kazoo attachment storage abstraction (AWS S3, Google Drive or other methods) intended to be accessible and retrievable by Kazoo after initial storage? For example, once a call recording MP3 or a voicemail message MP3 is stored using the new storage abstraction, does Kazoo access that file again later for retrieval or playback? If so, would Crossbar be used to access these files through the new attachment storage abstraction?


  • 2600Hz Employees

Correct. Kazoo will still try to read the voicemail, for instance, for playback during a call to check a voicemail box. Kazoo will not delete attachments, however, if the metadata is deleted. Where the attachment resides is transparent to the higher level apps like Crossbar - the low-level driver will fetch the attachment and hand it up to Crossbar, cf_voicemail, wherever it is needed.
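For illustration, here is a rough sketch of how a stored recording could be pulled back through Crossbar once the storage plan has written the attachment to S3 or Google Drive. The base URL, {ACCOUNT_ID}, {RECORDING_ID} and {AUTH_TOKEN} are placeholders (not from this thread), and the recordings endpoint must be enabled on your install:

# list call recording metadata for the account
curl -s -H "X-Auth-Token: {AUTH_TOKEN}" \
  "http://crossbar.example.com:8000/v2/accounts/{ACCOUNT_ID}/recordings"

# fetch the audio itself; the low-level storage driver retrieves the attachment
# from wherever it lives and Crossbar streams it back
curl -s -H "X-Auth-Token: {AUTH_TOKEN}" -H "Accept: audio/mpeg" \
  "http://crossbar.example.com:8000/v2/accounts/{ACCOUNT_ID}/recordings/{RECORDING_ID}" \
  -o recording.mp3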


  • 2 weeks later...

@Karl Anderson I have looked at the schema references provided and am confused when comparing them to the example object you provided in your post. In the call_recording object you have "account" and "endpoint", but these don't show up in the schema references. Can you explain? Is it sufficient to use "any" to cover both inbound and outbound, and for onnet and offnet? Lastly, there is a flag I saw somewhere (I wish I could remember where), record_on_answer, that only records answered calls. Is this supported?

I am trying to get this working by attaching the object to the user document. I am working on the hosted platform.

Thanks.


I am referring to the list below. I looked at the accounts schema (which I should have done previously) and do see account and endpoint listed. The user schema does not have similar sub-objects for endpoint; only the call_recording object is listed. Do you know if recording at the user level is supported? And do I need to list "any", "inbound" and "outbound" along with "any", "onnet" and "offnet", or should just using "any" cover all cases?

Thanks.

 

https://github.com/2600hz/kazoo/blob/master/applications/crossbar/priv/couchdb/schemas/call_recording.json

https://github.com/2600hz/kazoo/blob/master/applications/crossbar/priv/couchdb/schemas/call_recording.parameters.json

https://github.com/2600hz/kazoo/blob/master/applications/crossbar/priv/couchdb/schemas/call_recording.source.json


  • 2600Hz Employees

yes, user and/or device is supported.

as i referenced before, at the account level you can define the settings for the account itself and the settings that endpoints (user / device) can inherit.

but... since you're on the hosted platform, i believe monster-ui is going to be updated later today, and it has a call-recordings app where you can set all of these settings.
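for illustration, a minimal, untested sketch of enabling recording on a single user via Crossbar, assuming the user-level object takes the same direction/network nesting as the endpoint sub-object discussed above (the base URL, {ACCOUNT_ID}, {USER_ID} and {AUTH_TOKEN} are placeholders, not from this thread):

# enable recording for one user, letting "any" cover direction and network
curl -s -X PATCH \
  -H "X-Auth-Token: {AUTH_TOKEN}" \
  -H "Content-Type: application/json" \
  "http://crossbar.example.com:8000/v2/accounts/{ACCOUNT_ID}/users/{USER_ID}" \
  -d '{"data": {"call_recording": {"any": {"any": {"enabled": true, "format": "mp3", "url": "http://myurl.com"}}}}}'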


@lazedo What is the minimum needed to make call recording work? Nothing I've tried appears to work. I've PATCHed the JSON below to the account, but recordings aren't being sent to the URL.

 

{
  "data": {
    "call_recording": {
      "account": {
        "any": {
          "offnet": {
            "enabled": true,
            "format": "mp3",
            "url": "http://myurl.com"
          },
          "onnet": {
            "enabled": true,
            "format": "mp3",
            "url": "http://myurl.com"
          }
        }
      },
      "endpoint": {
        "any": {
          "offnet": {
            "enabled": true,
            "format": "mp3",
            "url": "http://myurl.com"
          },
          "onnet": {
            "enabled": true,
            "format": "mp3",
            "url": "http://myurl.com"
          }
        }
      }
    }
  }
}
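For reference, a rough sketch of issuing that PATCH and then reading the account back to confirm the object was actually saved (base URL and token are placeholders, not from this thread):

# apply the call_recording object above to the account
curl -s -X PATCH \
  -H "X-Auth-Token: {AUTH_TOKEN}" \
  -H "Content-Type: application/json" \
  "http://crossbar.example.com:8000/v2/accounts/{ACCOUNT_ID}" \
  -d @call_recording_patch.json   # the JSON shown above

# read the account doc back and confirm call_recording is present
curl -s -H "X-Auth-Token: {AUTH_TOKEN}" \
  "http://crossbar.example.com:8000/v2/accounts/{ACCOUNT_ID}" | grep -o '"call_recording"'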


  • 3 weeks later...

Graham,

We have an S3 account and an access key. How do I supply the header information that carries the Authorization? As of now, we can only see a place to put the server URL, but not the header.

 

Thanks,

Varun

 

On 8/25/2017 at 11:43 AM, Graham Nelson-Zutter said:

@lazedo We've created PCAPs both before and after changing from our older AWS IAM user to our new AWS master user.

We can see the new credentials are being used since we updated the /storage document. However, we see the same HTTP header format being sent to AWS S3 (see below).

PUT /{account_db}-201708/201708-{media_id}.mp3 HTTP/1.1
Content-Type: 
authorization: AWS {IAM-ID}:{IAM-Secret-Hash}
User-Agent: hackney/1.6.2
content-md5: {md5}
date: Thu, 24 Aug 2017 14:17:48 GMT
Host: {s3_bucket}.{s3_host}

Are you seeing a different HTTP header format being sent to AWS S3 when you test? Can you provide an example here?

thanks,
Graham

 


  • 2 weeks later...

Hello, guys. Hope you don't mind me joining your discussion. :)

I can't understand why my Kazoo (master) tries to save media files using the default proxy params from the system_config/media doc, even though I want it to save recordings to S3.

Oct 17 09:59:41 kzdev 2600hz[10256]: |NTY4OTU1ZjQwMWM4NTcyMzMxZGMxZWRiMTgyNmFjMzg.|kapps_call_command:3129 (<0.2934.0>) Error Storing File /tmp/10f704222e135bfcf08220c4bcf7c1ff.mp3 From Media Server freeswitch@kzdev.example.com : Received HTTP error 0 trying to save /tmp/10f704222e135bfcf08220c4bcf7c1ff.mp3 to http://d7f346ee18b67a81:43e86f68d5204a88@kzdev:24517/store/g2gEbQAAADdhY2NvdW50JTJGYzglMkYwYyUyRmMxMzA1ZTVkNjA0ZTZkNDY1OGQ2ZTZjYzk2MTctMjAxNzEwbQAAACcyMDE3MTAtOTM1MTNiZjk0YWZiNTNlODcyNTY5YmRiZDA1OTI4YjFtAAAAJDEwZjcwNDIyMmUxMzViZmNmMDgyMjBjNGJjZjdjMWZmLm1wM2wAAAABaAJkAAhkb2NfdHlwZW0AAAAOY2FsbF9yZWNvcmRpbmdq/10f704222e135bfcf08220c4bcf7c1ff.mp3

I've configured the account-level params for recording and created the storage and storage/plan documents, but I can't get Kazoo to send files to the proper place.


  • 2600Hz Employees
5 hours ago, Alexander Mustafin said:

Received HTTP error

this is usually a dns error.

freeswitch will send to the kazoo media proxy, which will save the attachment. the storage plan then sends the attachment to the provider (s3, gdrive, gstorage, OneDrive, dropbox, azure, http, ftp).

to be clear, freeswitch will not connect to s3 or other provider using the storage plans.
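a quick sanity check, based on the log above (the hostname and port come from that log; adjust for your setup), is to confirm the freeswitch box can resolve and reach the kazoo media proxy:

# from the FreeSWITCH server: does the media proxy hostname resolve?
getent hosts kzdev

# is the media proxy port reachable? (any HTTP response at all is fine here,
# we only care that DNS and TCP work)
curl -sv http://kzdev:24517/ -o /dev/null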


Thank you!

Indeed, FreeSWITCH was using the short DNS name and it wasn't written to /etc/hosts. I've fixed that, but now I get a more complicated 500 error.

I'm using an all-in-one server for tests, so I dumped traffic on port 24517, where FreeSWITCH sends the MP3 file.

root@kzdev:/home/admin/# ss -antp|grep 24517
LISTEN     0      1000                      *:24517                    *:*      users:(("beam.smp",pid=10256,fd=27))
T 10.2.0.237:24517 -> 10.2.0.237:12388 [AP]
HTTP/1.1 500 Internal Server Error.
server: Cowboy.
date: Tue, 17 Oct 2017 16:53:36 GMT.
content-length: 304.
.
{socket_error,
    {nxdomain,
        [{lhttpc_client,send_request,1,
             [{file,"src/lhttpc_client.erl"},{line,222}]},
         {lhttpc_client,execute,9,[{file,"src/lhttpc_client.erl"},{line,171}]},
         {lhttpc_client,request,9,
             [{file,"src/lhttpc_client.erl"},{line,93}]}]}}

No idea which hostname Cowboy is trying to resolve, but I checked the short name, the FQDN and the hostname of the S3 endpoint, and they are all resolvable. Unfortunately, there is nothing in the main log about this 500 error.


I should mention that my dev server is also hosted on Amazon infrastructure.

 "plan": {
    "modb": {
      "types": {
        "call_recording": {
          "attachments": {
            "handler": "859619ec3b764362982d76b2919b602d"
          }
        },
        "mailbox_message": {
          "attachments": {
            "handler": "859619ec3b764362982d76b2919b602d"
          }
        }
      }
    }
  },
  "attachments": {
    "859619ec3b764362982d76b2919b602d": {
      "settings": {
        "secret": "{AWS_SECRET}",
        "key": "{AWS_KEY}",
        "bucket": "devcallrec",
        "scheme": "https",
        "region": "eu-central-1"
      },
      "name": "Kazoo S3",
      "handler": "s3"
    }
  },

I also tried adding the "host" param to the configuration, but with no luck.
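For what it's worth, a hedged sketch of how a storage plan like the one above might be submitted through the Crossbar storage endpoint (the base URL, path, and placeholders are assumptions, not taken from this thread; check the storage API docs for the exact verb on your version):

curl -s -X PUT \
  -H "X-Auth-Token: {AUTH_TOKEN}" \
  -H "Content-Type: application/json" \
  "http://crossbar.example.com:8000/v2/accounts/{ACCOUNT_ID}/storage" \
  -d @storage.json   # {"data": {"plan": {...}, "attachments": {...}}} as shown above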


  • 2600Hz Employees

a quick look at the code shows that the problem may be related to region encoding; the upstream library expects string() not binary().

if you can edit & compile in your environment, you can try this.

in core/kazoo_attachments/kz_att_s3.erl, line 40

-    Region = maps:get('region', Map, 'undefined'),
+    Region = case maps:get('region', Map, 'undefined') of
+                 'undefined' -> 'undefined';
+                 Bin -> kz_term:to_list(Bin)
+             end,
 
Link to comment
Share on other sites

  • 6 months later...

Just following up on my original request: "Somewhere I thought I read that it is now possible to record each leg of a call separately."

When a call is recorded, it is stored as a single stereo file with the inbound audio on one channel and the outbound audio on the other. So it's easy to split the file into two mono files and then run each through a speech recognition and sentiment analysis algorithm.
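For anyone wanting to do the same split, a minimal sketch using sox (the filename is an assumption; verify which channel corresponds to which leg on your recordings):

# split the stereo recording into one mono file per channel/leg
sox recording.mp3 leg_in.wav  remix 1   # first channel
sox recording.mp3 leg_out.wav remix 2   # second channel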

Thanks 2600hz!

