
Posted

Hey there,

I have a storage plan configured that points to an AWS bucket we set up for a customer. That customer now wants to host their own bucket, which is fine. We transferred all recordings and files to the customer's bucket, but now, when you try to access the files via the Kazoo API, it fails. Do we need to "update" something somewhere since we changed the bucket?

Thanks,

Joe

Posted (edited)

As I understand it, the S3 parameters are stored with each recording itself so that they can be referenced and loaded from their source even if the current recording location has since changed.

You may be able to update the pvt_attachments handler field on each recording record to point at the new storage location, but I've honestly never tried. Alternatively, updating the handler used for the previous S3 storage, instead of adding the new bucket as a separate handler, may work here.
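To make the idea concrete, here's a rough sketch of the kind of per-attachment metadata I mean. The field names and layout below are illustrative assumptions on my part, not the exact Kazoo schema, so dump a real recording doc from CouchDB and compare before relying on them:

```python
# Hypothetical shape of a recording doc's attachment metadata.
# Field names are assumptions -- verify against a real doc in your cluster.
recording_doc = {
    "_id": "201909-abcdef",
    "pvt_type": "call_recording",
    "pvt_attachments": {
        "recording.mp3": {
            "handler": "s3",
            "bucket": "old-customer-bucket",   # still references the old bucket
            "key": "recordings/recording.mp3",
        }
    },
}

# Repointing a transferred recording would mean rewriting these fields in place:
att = recording_doc["pvt_attachments"]["recording.mp3"]
att["bucket"] = "new-customer-bucket"

print(att["bucket"])  # new-customer-bucket
```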

Posted (edited)
On 9/25/2020 at 2:51 AM, Joseph Watson said:

Hi. Assuming the new bucket grants read permissions to the Kazoo app server, make sure that each file and folder in S3 was moved and set with the right permissions as well. It has happened to me in the past that I pushed a file to S3 without setting permissions on it, and it couldn't be read.

Test using the AWS CLI from the Kazoo app server.
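For reference, a couple of quick checks from the app server. The bucket name and object key are placeholders, so substitute real values:

```shell
# Placeholders: replace the bucket and key with real values for your account.
# Listing the transferred objects confirms the credentials and bucket policy.
aws s3 ls s3://customer-new-bucket/recordings/

# Fetching one object's metadata confirms that individual files are readable.
aws s3api head-object --bucket customer-new-bucket --key recordings/example.mp3
```

If `head-object` returns 403 for the transferred files but works for newly recorded ones, the problem is object-level permissions rather than the Kazoo storage plan.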

Posted

The permissions are set fine. New recordings are going into the bucket, and we can access them via the API. It's the recordings we transferred from the old bucket over to the new bucket that we have an issue with. I need to somehow update those records so they reflect the right bucket.

There has to be a sup command to do this.

Posted
1 minute ago, Joseph Watson said:

There has to be a sup command to do this.

Oh, I gotcha. Otherwise, you could script it to go through each record in the MODBs and update it with the new storage plan ID.

Posted
11 minutes ago, Joseph Watson said:

Any idea what a script like that would look like? Honestly, I am not super wonderful with CouchDB, so any help would be fantastic.

I would suggest first checking each monthly database for the account in CouchDB and verifying whether the objects corresponding to each recording point to the old storage plan, or carry any other reference to the old S3 bucket. Then the next step would be to write a script in Python (or any other language) that connects to CouchDB and changes that value for each of the files in each monthly database.

There are some sample scripts in the community scripts repo.
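Something along these lines, as a rough sketch: it walks one monthly database over CouchDB's HTTP API and rewrites any old-bucket references in each recording doc. The attachment field layout, the `pvt_type` value, and the MODB naming are assumptions on my part (I have not run this against a live cluster), so dump a couple of real docs first and adjust:

```python
import json
import urllib.request

OLD_BUCKET = "old-customer-bucket"   # assumption: replace with your old bucket
NEW_BUCKET = "new-customer-bucket"   # assumption: replace with your new bucket

def update_bucket_refs(doc, old=OLD_BUCKET, new=NEW_BUCKET):
    """Rewrite attachment metadata still pointing at the old bucket.

    Returns True if the doc was modified. The 'pvt_attachments' layout is
    assumed -- verify against a real recording doc before using this.
    """
    changed = False
    for att in doc.get("pvt_attachments", {}).values():
        if att.get("bucket") == old:
            att["bucket"] = new
            changed = True
    return changed

def repoint_modb(couch_url, db_name):
    """Fetch every recording doc in one monthly database and save changes."""
    # _all_docs with include_docs=true pulls every document in the MODB.
    url = f"{couch_url}/{db_name}/_all_docs?include_docs=true"
    with urllib.request.urlopen(url) as resp:
        rows = json.load(resp)["rows"]
    for row in rows:
        doc = row.get("doc") or {}
        if doc.get("pvt_type") == "call_recording" and update_bucket_refs(doc):
            req = urllib.request.Request(
                f"{couch_url}/{db_name}/{doc['_id']}",
                data=json.dumps(doc).encode(),
                headers={"Content-Type": "application/json"},
                method="PUT",
            )
            urllib.request.urlopen(req)

# Example call (placeholder MODB name -- Kazoo encodes the account ID and the
# year/month into the database name, so list the databases to find yours):
# repoint_modb("http://admin:secret@127.0.0.1:5984", "account%2Fxx%2Fxx%2F...-202009")
```

I would run it against a copied test database first, and take a CouchDB backup before touching production MODBs.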

  • 2 weeks later...
Posted

So I have an update. After doing some debugging, I found that the customer had simply deleted the AWS access key. I attempted to update the storage plan with a new key, but it appears that Kazoo is still looking for the old one. What can I do now?

 

kz_att_s3:271(<0.17218.4495>) S3 error: {aws_error,{http_error,403,"Forbidden",<<"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>AKIAJXP6D4AAZFZOJ2YA</AWSAccessKeyId>
