Accelerating data processing

We have had to limit our data update and generation functions because of timeouts in the Bubble engine.
Example 1: we want to create 1 million new records in a table, but Bubble times out at about 30,000.
Example 2: we want to update 1 million records, but the engine times out at about 10,000 records.

We are probably pushing the platform's capabilities too hard, but we're looking for any immediate advice that might help us either control the timeout limit or increase the processing speed.


Given the large volume of data involved, it may be worth considering a platform that specializes in databases. There are also other no-code/low-code options available for this.

I would definitely look at the Data API features for bulk creation like this. It's much, much faster than using workflows. The Data API - Bubble Docs
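To make the bulk-creation suggestion concrete, here is a rough Python sketch of calling the Data API's bulk endpoint. The app URL, type name, and API key are placeholders you would replace with your own; as I read the Bubble docs, the bulk endpoint accepts newline-delimited JSON as plain text, up to 1,000 records per call, so a million records means roughly 1,000 sequential calls.

```python
import json
import urllib.request

# Placeholder app URL, data type, and key -- substitute your own.
BULK_URL = "https://yourapp.bubbleapps.io/api/1.1/obj/customer/bulk"
API_KEY = "your-api-key"

def chunked(records, size=1000):
    """Split records into chunks of at most `size`
    (the bulk endpoint caps each call at 1,000 records)."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def bulk_payload(chunk):
    """The bulk endpoint expects newline-delimited JSON:
    one JSON object per line, not a JSON array."""
    return "\n".join(json.dumps(record) for record in chunk)

def post_bulk(chunk):
    """POST one chunk as text/plain with a bearer token."""
    req = urllib.request.Request(
        BULK_URL,
        data=bulk_payload(chunk).encode("utf-8"),
        headers={
            "Content-Type": "text/plain",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The response has one status line per submitted record.
        return resp.read().decode("utf-8").splitlines()
```

Looping `post_bulk` over `chunked(records)` from a small script or server-side service keeps each individual request well under any timeout, and a failed chunk can simply be retried.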

Another piece of advice would be to use recursive workflows. Matt Neary has an excellent walkthrough of this here: Recursive workflows in Bubble (OR, HOW TO LOOP THROUGH A LIST) - YouTube. They are not necessarily speedy with datasets of the size you're describing, but they are more reliable than "Schedule workflow on a list", which is what it sounds like you're doing. There are also more ways to validate or re-run results.
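For anyone unfamiliar with the pattern: a recursive workflow is a backend workflow that processes one batch and then schedules itself for the next batch, stopping when nothing is left. Bubble workflows are built in the editor rather than in code, so this is only an illustrative Python sketch of the control flow, with invented names (`process_batch`, `handle`):

```python
def process_batch(records, start, batch_size, handle):
    """Illustrates the recursive-workflow pattern: handle one batch,
    then 're-schedule' the same workflow for the next batch."""
    batch = records[start:start + batch_size]
    if not batch:
        # Nothing left -- plays the role of the "Only when"
        # condition that stops the recursion in Bubble.
        return
    for record in batch:
        handle(record)  # the per-record action (create/update a thing)
    # In Bubble this step would be "Schedule API Workflow" pointing
    # at this same workflow, with `start` advanced by `batch_size`.
    process_batch(records, start + batch_size, batch_size, handle)
```

One difference worth noting: Bubble schedules a fresh workflow run each time (often with a small delay between runs), so unlike literal recursion there is no growing call stack, and a failed batch can be re-scheduled from where it stopped.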


Of course you can't expect 1 million things to be created instantly, but you can use the Data API like @flowtron mentioned and call it recursively in chunks of a couple hundred or so at a time.
