If you want the most stable setup, the answer is simple: use a small backend service / worker whose only job is to read the file incrementally.
Bubble uploads the file (Bubble already stores it in S3), then you pass the file URL to this worker. It reads the file line by line or in small byte chunks and sends 200–500-line batches to Xano (or does the matching itself). At no point does anything load the full file into memory.
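A minimal sketch of that batching idea in plain JS (names and batch size are illustrative; in the real worker the text chunks would come from streaming the file URL rather than an in-memory array):

```javascript
// Consume the file as a sequence of text chunks and yield fixed-size
// line batches — the whole file is never held in memory at once.
function* batchLines(textChunks, batchSize = 500) {
  let carry = ""; // partial line left over from the previous chunk
  let batch = [];
  for (const chunk of textChunks) {
    const lines = (carry + chunk).split("\n");
    carry = lines.pop(); // last piece may be an incomplete line
    for (const line of lines) {
      batch.push(line);
      if (batch.length === batchSize) {
        yield batch;
        batch = [];
      }
    }
  }
  if (carry !== "") batch.push(carry); // final line without a newline
  if (batch.length) yield batch;
}
```

Each yielded batch would then be POSTed to Xano (or matched locally) before the next one is assembled.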
Make and n8n can still be useful, but they are not truly streaming for this kind of workload. They typically download the whole file and convert it to text before chunking, which means you will eventually hit size, memory, or execution limits as files get larger. Note that n8n webhooks have a hard request size limit of around 16 MB, so with files this size you’re already at the ceiling.
So practically:
For maximum stability: Bubble upload → file URL → small backend worker → batched matching → Bubble results
If you use Make or n8n: treat them as orchestration tools and expect fragility
Hi, would you mind sharing exactly the type of file and what you want to accomplish? This can also be in a DM.
I am finalizing an n8n competitor with a very performant architecture that by design has no problems handling very large files. At least in theory. So it would be great to have this use case and see how we can support it.
Without going into detail, our engine is fully LLM-native, which means it can follow LLM instructions to build workflows and can include plain vanilla scripts, also written by the LLM, to do this kind of processing.
The workflow could be that you post the file to the workflow engine API and, when done, it outputs the result to a webhook or something, chunked or not. Or pushes it directly to Xano.
We ran all kinds of tests with advanced text processing, but with no more than 100,000 lines, which took about 1–2 seconds. So speed will depend a bit on what you need to extract and on the number of rows.
If the user uploads the file, why not have a client-side script (that would be the cheapest option) split the file and send it to a backend WF as a list?
You’d set this up directly on Cloudflare using Cloudflare Workers.
Create a free Cloudflare account → go to Workers & Pages → create a new Worker. You can do it entirely from their dashboard without any prior setup.
The Worker is just a small JS endpoint that Bubble calls with the file URL and an upload ID. It fetches the file, streams it, batches the lines, and sends them to your backend.
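A rough sketch of what that Worker routine could look like. The endpoint URL, payload shape, and `postBatch` helper are placeholders (not real Cloudflare or Xano APIs); the batch sender is injected so the streaming logic stays testable:

```javascript
// Stream a file from its URL, batch the lines, and hand each batch to
// postBatch (which would POST it to your backend). Returns total lines sent.
async function streamAndForward(fileUrl, uploadId, postBatch, batchSize = 500) {
  const res = await fetch(fileUrl);
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  let carry = "", batch = [], total = 0;
  const flush = async () => {
    await postBatch({ uploadId, lines: batch });
    total += batch.length;
    batch = [];
  };
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const lines = (carry + value).split("\n");
    carry = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      batch.push(line);
      if (batch.length >= batchSize) await flush();
    }
  }
  if (carry) batch.push(carry); // final line without a newline
  if (batch.length) await flush();
  return total;
}

// Inside the Worker it would be wired up roughly like this
// (the Xano URL below is a placeholder):
//
// export default {
//   async fetch(request) {
//     const { fileUrl, uploadId } = await request.json();
//     const postBatch = (payload) =>
//       fetch("https://your-xano-endpoint/batch", {
//         method: "POST",
//         headers: { "content-type": "application/json" },
//         body: JSON.stringify(payload),
//       });
//     const total = await streamAndForward(fileUrl, uploadId, postBatch);
//     return Response.json({ uploadId, linesProcessed: total });
//   },
// };
```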
Happy to DM you a concrete example and links if that helps.
You can split the file in the browser and send chunks to backend workflows, and it's usually the cheapest route. The downside is reliability (the browser has to stay open, resuming is harder, and you still need to handle retries and deduping). It also means more logic and surface area on the client.
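For reference, the client-side split can be as small as this. It assumes the file fits in browser memory (fine at this size), and `sendToBackendWorkflow` is a placeholder for your Bubble backend-workflow POST:

```javascript
// Read the uploaded File/Blob, split it into fixed-size line chunks,
// and send them sequentially so a failure can be retried or resumed.
async function splitAndSend(file, linesPerChunk, sendToBackendWorkflow) {
  const text = await file.text(); // loads the whole file client-side
  const lines = text.split(/\r?\n/);
  if (lines.at(-1) === "") lines.pop(); // drop trailing empty line
  const chunks = [];
  for (let i = 0; i < lines.length; i += linesPerChunk) {
    chunks.push(lines.slice(i, i + linesPerChunk));
  }
  for (const [index, chunk] of chunks.entries()) {
    await sendToBackendWorkflow({ index, total: chunks.length, lines: chunk });
  }
  return chunks.length;
}
```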
For large files where you want it to just run and finish, a small worker is usually more stable, since all the chunking happens server-side.
It takes less than a second to split the file, so that's not a big thing. After that, it's sending to the backend WF and processing the data that will take more time. There are 643,161 lines in this file, so that means 7 Schedule on a list actions.