Reading large files (5GB) in a server-side action

To all Bubble Devs! hope you’re all well!

@vini_brito

I’m hoping to get some advice on how to do this, and hopefully this post can serve as some sort of documentation for other people facing the same problem.

I’m attempting to use a library (server side) in an action to upload a file which can be a maximum of 5GB in size. Now I understand how to do this within standard element actions: we check the following option in our element…

And then we’re able to utilise the instance.uploadFile function to run something like…

instance.uploadFile(instance.data.recordedData, function(err, url) {
    if (err) {
        console.error(err);
        alert("Make sure the file upload options are set correctly within the recorder's element, check the console for error details");
    } else {
        instance.publishState('url', url);
        instance.triggerEvent('saved');
    }
}, null, function(progress) {
    instance.publishState('upload_progress', progress);
});

So how does this work within a server side action? I’m guessing the same rules don’t apply.

The code I’m working with looks like the following.
This is the example taken from the library here.

const gpmfExtract = require('gpmf-extract');
const goproTelemetry = require('gopro-telemetry');
const fs = require('fs');

const file = fs.readFileSync('path_to_your_file.mp4');

gpmfExtract(file)
  .then(extracted => {
    goproTelemetry(extracted, {}, telemetry => {
      fs.writeFileSync('output_path.json', JSON.stringify(telemetry));
      console.log('Telemetry saved as JSON');
    });
  })
  .catch(error => console.error(error));

In a nutshell, it should read a video file, extract the telemetry data out of it, and return that data as either a file, some URL, or just the raw JSON, since it’s structured data anyway; I can work with any of those options.

Now, if I just take the above and swap the file path for this URL, for example (a file stored in my Bubble storage)

//s3.amazonaws.com/appforest_uf/f1646857229470x692402763724764900/gx050045.mp4

It throws the following (this example has the https: protocol added to the URL, but either form throws the same type of error). I’m pretty sure we can’t do this because the file has to be downloaded first rather than just accessing the URL in this way.

[screenshot of the error]

So I tweak the code so it looks like this, making sure we GET the file first using context.request.

function(properties, context) {

    const gpmfExtract = require('gpmf-extract');
    const goproTelemetry = require('gopro-telemetry');
    const fs = require('fs');

    // const file = fs.readFileSync('https://s3.amazonaws.com/appforest_uf/f1646857229470x692402763724764900/gx050045.mp4');

    var options = {
        uri: 'https://s3.amazonaws.com/appforest_uf/f1646857229470x692402763724764900/gx050045.mp4',
        method: "GET",
        encoding: null,
        headers: { "Content-type": "application/octet-stream" }
    };
    var file = context.request(options).body;

    gpmfExtract(file)
        .then(extracted => {
            goproTelemetry(extracted, {}, telemetry => {
                // fs.writeFileSync('output_path.json', JSON.stringify(telemetry));
                console.log(JSON.stringify(telemetry));
            });
        })
        .catch(error => console.error(error));

}

Nope, I can’t get that working either. It looks like it’s timing out: I see nothing in the server logs, but the browser shows this.

Ok, so let’s just try this instead…

function(properties, context) {

    var url = 'https://s3.amazonaws.com/appforest_uf/f1646857229470x692402763724764900/gx050045.mp4';

    context.request(url, function (err, res) {
        if (res) {
            console.log("something good happened!");
        } else {
            console.log(err);
        }
    });

}

Nope, still can’t get it working; it just throws the same client error as above.

Any advice or examples would be very much appreciated. I always have some mental block when it comes to dealing with file objects in server-side code; despite how much I read, I can never get it right!

Thanks!


Hey!

Yup, if you have a link, you need to download it.
But, there is a problem here. A server-side action has a limitation of 30 seconds per execution.


Hey @lottemint.md

Ah ok, I must admit everything I try seems to be pointing towards some sort of timeout issue. Thanks for getting back to us. Maybe there’s no reliable way to handle this; I’ll keep playing with it.

Might wanna turn to a Lambda or Firebase function to handle files of this size.


Thanks Jared, I’ve asked @cal in the hope he might be able to give us a definitive answer anyway, but I suspect this is looking like the way to go, shame really.


I imagine that you’re dealing with timeout issues also. 4 GB is a large file to not only load but also to then hold in temporary storage. We’re not given a lot of space when our server-side functions fire up; I’m not sure exactly how much, but I do know that there’s a limit.

When you use the fs module you’re bringing a 4 GB file into local storage. That could also be part of the problem: you might be running out of disk space while you’re trying to run the function.

Yep, I think you’re right there; I had my suspicions but wanted to ask the question anyway. I’d obviously run into the initial post about enabling the new large file support options and utilising the instance.uploadFile function, but there wasn’t much on doing something similar with server-side actions. I guess, given the way they’re handled, it’s not really viable to try and process such large files that way.

The new gen Firebase functions can run for up to 60 minutes. I’d recommend checking those out!

Should be plenty of time to process your file. (Hopefully)

Brill, thanks a lot


Agree with Jared and the others. For example, Netlify functions are super easy to set up, but they only go up to 1GB.
If you want, there’s AWS Lambda, which means doing it the hard way; it goes up to 10GB now. But maybe you don’t need to load the entire video to process it: maybe there’s some library out there that lets you pipe it instead of load, edit, then upload somewhere else.

Perhaps there’s some ready-made API for that, in case it’s not your main business case?
