We are developing a DAM and are successfully uploading and saving files to Bubble's S3 service on AWS.
Rather than securing these files using the Bubble ‘attach’ facility, we would like to hand out temporary URLs of the underlying files. To do this we would like to take a ‘file’ (stored on Bubble S3) and create a copy of it on Bubble S3 that we will delete later.
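For context, the usual mechanism behind ‘temporary URLs’ on S3 is a signed URL with an expiry. Below is a minimal, illustrative sketch of that idea using stdlib HMAC signing; the real mechanism would be an S3 presigned URL, and the secret key and URL format here are assumptions for illustration only:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical signing key, never exposed to clients

def make_temporary_url(base_url, ttl_seconds, now=None):
    """Append an expiry timestamp and an HMAC signature to a file URL."""
    expires = int((now if now is not None else time.time()) + ttl_seconds)
    payload = f"{base_url}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{base_url}?expires={expires}&sig={sig}"

def is_url_valid(url, now=None):
    """Verify the signature and check that the expiry has not passed."""
    base, _, query = url.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires = int(params["expires"])
    payload = f"{base}|{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, params["sig"]):
        return False  # URL was tampered with
    return (now if now is not None else time.time()) < expires
```

The point of the signature is that the link can be handed out freely but stops working after the TTL, and any attempt to alter the underlying file path invalidates it.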
Using a backend workflow, when we try to create a new ‘file’ (via ‘:saved to S3’) from the URL of the current ‘file’, it appears to simply keep the same file on S3 and not create a copy.
Is there a way to take an S3 ‘file’ from a ‘thing’ and create a copy of that file as a different S3 ‘file’, without Bubble reusing the same file on S3?
It seems that the primary file field belongs to the same asset in your workflow.
Thank you, yes, that was just a snippet. The first step created the file using purely the URL of the original file; then this step repeats it using the ‘save to S3’ component.
In your experience, can you clone an S3 file purely by setting one new file equal to a previous one? Every way I do it, I'm getting the original file again, never a new one saved.
Create a new data type called ‘file’ and add a file-type field called ‘file’. Then, in your workflow, add a step to create a new ‘file’ thing, and set its ‘file’ field to the result of the step where the S3 file was created.
Thank you - yes, does that work for you?
I know! But it simply clones the URL of the S3 file, i.e. it copies the reference to the same file on S3, rather than creating a new S3 file
I’m now experimenting with referencing the S3 file via a ‘fetch’ URL on Cloudinary, to see if I can fool Bubble into saving a new file…
OK - it's slowly dawning on me that I have previously misunderstood how Bubble thinks about files, and why it doesn’t know how to do what I want.
First - Bubble thinks of a file primarily as a URL reference to it on a 3rd-party service, and provides its own S3 service to support this.
To get a file onto the 3rd-party service in the first place, we can use Bubble's file uploader services, which can store on Bubble’s S3 service, or use any other storage mechanism that ultimately returns a URL.
So, when we need to ‘copy’ a file, Bubble is not thinking that IT will move a file from one place to another. It CAN provide the URL of the original file, but it CANNOT save a new one - not without going back through the uploader.
In our case, we want to use a workflow to clone a file, so we’d need an API or something to make a copy of the file and store it somewhere else - without (and this is the tricky bit) leaving a reference to the original file unencrypted within the resultant URL. (For example, Cloudinary’s ‘fetch’ requires the original URL to appear inside the Cloudinary URL, so the end user could still bypass the Cloudinary URL and go direct - which is exactly what we’re trying to avoid.)
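For reference, if you did have credentials for the bucket, S3 itself can copy an object server-side (no bytes round-tripping through your app) via a PUT to the destination key with the `x-amz-copy-source` header - Bubble's managed S3 doesn't hand out those credentials, which is why another route is needed. A sketch of what that request looks like, with made-up bucket and key names and the SigV4 auth headers omitted:

```python
def build_s3_copy_request(bucket, source_key, dest_key):
    """Describe S3's server-side CopyObject call: a PUT to the destination
    key, naming the source in the x-amz-copy-source header.
    (SigV4 authentication headers omitted for brevity.)"""
    return {
        "method": "PUT",
        "url": f"https://{bucket}.s3.amazonaws.com/{dest_key}",
        "headers": {"x-amz-copy-source": f"/{bucket}/{source_key}"},
    }

# Example: copy original.png to copy.png within one (hypothetical) bucket
req = build_s3_copy_request("my-bucket", "original.png", "copy.png")
```

Note that the resulting object URL contains only the destination key, so nothing about the original file leaks to the end user.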
If anyone would like to correct me on this, please do!
SOLVED! OK, so the file option on the API Connector ([New Feature] File option on API Connector plugin) allows a file to be fetched from a URL and saved automatically to S3!
So I created a dumb API call:
…to which I pass the URL of the source file; it then ‘fetches’ it, stores a new S3 file, and hands back the URL.
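In effect, the API call is doing a fetch-then-store: download the bytes behind the source URL, save them under a fresh opaque name, and return the new URL. A sketch of that pattern - the `fetch`/`store` callables and the in-memory ‘bucket’ here are stand-ins for the real HTTP fetch and S3 upload:

```python
import uuid

def copy_file(source_url, fetch, store):
    """Download the bytes behind source_url, then store them under a fresh
    opaque key so the returned URL carries no trace of the original."""
    data = fetch(source_url)
    new_key = uuid.uuid4().hex  # random name: nothing derived from source_url
    return store(new_key, data)

# Stand-ins for the real HTTP fetch and S3 upload:
bucket = {}

def fake_fetch(url):
    return b"file-bytes"

def fake_store(key, data):
    bucket[key] = data
    return f"https://s3.example.com/{key}"

new_url = copy_file("https://bubble-s3.example.com/original.png",
                    fake_fetch, fake_store)
```

Because the new key is random, handing out (and later deleting) the copy's URL never exposes the original file's location.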
This topic was automatically closed after 70 days. New replies are no longer allowed.