I have created a few workflows to stress test my app.
( every 0.01 sec ) ( create a thing )
( every 0.01 sec ) ( modify a thing )
-------------- Works as expected, and I kept an eye on my plan limits --------------
My PROBLEM –
After the test, there were 15k records of data.
I created a simple workflow:
Delete a list of things
Search for Orders (no filter, search all Orders)
It runs into errors in the browser after about 30 seconds.
When I go back to the database in the Bubble backend,
it only managed to delete 400~500 of them.
My question: can this be solved by upgrading my plan?
( My users may need to delete hundreds of records too )
( What if they run into the browser error too? )
In my experience, Bubble doesn't handle the creation or deletion of large volumes of data in a short space of time via workflows well. I don't think this is a plan-related issue; it's more about the restrictions Bubble puts in place to manage overall capacity.
I'm sure it would be something addressed on an Enterprise plan. But even with client apps on the Production plan, this seems to behave in a similar manner to an app on the Pro or Personal plans.
I'm not sure there is a Bubble-centric solution for this, sorry.
Josh @ Support Dept
Helping no-code founders get unstuck fast, save hours, & ship faster with an expert on demand
You can follow the instructions in the Bubble documentation almost verbatim to recursively delete records in chunks of 100-500 via workflows iteratively scheduled one second apart. Or you can use the Data tab's bulk operation, provided you do not close the browser tab.
We have used both methods to clean up tens of thousands of records. The main thing is having the patience to run throttled workflows that slow-drip your processing.
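For reference, this is roughly what the same throttled, chunked delete looks like if you drive it from outside Bubble via the Data API instead of a recursive backend workflow. It is only a sketch under assumptions: your app's Data API is enabled for the `order` type, `yourapp`, the version-test URL, and the API token are placeholders, and a 100-record chunk with a one-second pause fits your plan's capacity.

```typescript
// Minimal sketch: delete all "order" things in small chunks with a pause
// between chunks, mirroring the throttled recursive-workflow pattern.
// ASSUMPTIONS: the Data API is enabled for the "order" type, APP_URL and
// API_TOKEN are placeholders for your own, and a 100-record chunk is
// conservative enough for your plan's capacity.

const APP_URL = "https://yourapp.bubbleapps.io/version-test/api/1.1/obj/order"; // hypothetical app URL
const API_TOKEN = process.env.BUBBLE_API_TOKEN!; // a Data API private key

const CHUNK_SIZE = 100; // stay in the 100-500 range the docs suggest
const PAUSE_MS = 1000;  // one second between chunks, like the scheduled workflows

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function deleteAllOrders(): Promise<void> {
  while (true) {
    // Fetch the next chunk of ids (no filter = "search all Orders").
    const listRes = await fetch(`${APP_URL}?limit=${CHUNK_SIZE}`, {
      headers: { Authorization: `Bearer ${API_TOKEN}` },
    });
    const body = await listRes.json();
    const results: { _id: string }[] = body.response.results;
    if (results.length === 0) break; // nothing left to delete

    // Delete the chunk one record at a time.
    for (const thing of results) {
      await fetch(`${APP_URL}/${thing._id}`, {
        method: "DELETE",
        headers: { Authorization: `Bearer ${API_TOKEN}` },
      });
    }

    console.log(`Deleted ${results.length} orders, ${body.response.remaining} reported remaining`);
    await sleep(PAUSE_MS); // throttle before the next chunk
  }
}

deleteAllOrders().catch(console.error);
```

Because each chunk is deleted before the next fetch, the script never needs a cursor; it just keeps taking the first 100 results until the search comes back empty.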
We are in the middle of investigating a potential solution that provides throttled processing with O(2√N) scaling in time while guaranteeing eventual consistency. It is the last part, eventual consistency, that we are still working on establishing.
The only way we have found to get anything to complete is to heavily throttle our workflows so that every single one is scheduled one second apart, and then to further break everything down into very small, atomized workflows that do as little as possible. Our capacity logs show tens of thousands of workflows scheduled, but they run slowly over an extended period to limit actual processor consumption.
What it boils down to is that scheduling is cheap, while processing is expensive.
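To make that arithmetic concrete, here is a small hypothetical helper (`planThrottledBatches` is my name, not anything from Bubble) that splits a job into batches staggered one second apart. The √N batch size is only one reading of the O(2√N) figure above (√N batches of √N records each); the post does not spell out the actual design, so treat this as an illustration of the scheduling math rather than the poster's implementation.

```typescript
// Sketch of the "schedule cheap, process slow" plan: split N records into
// small batches and stagger each batch one second apart, so many tiny jobs
// are queued up front but processed as a slow drip.
// ASSUMPTION: batch size of sqrt(N) is my reading of the O(2 * sqrt(N))
// figure, not a confirmed description of the poster's design.

interface BatchPlan {
  batchIndex: number;
  size: number;
  runAt: Date; // when this batch's workflow would be scheduled to run
}

function planThrottledBatches(totalRecords: number, start: Date = new Date()): BatchPlan[] {
  const batchSize = Math.max(1, Math.ceil(Math.sqrt(totalRecords)));
  const batchCount = Math.ceil(totalRecords / batchSize);

  return Array.from({ length: batchCount }, (_, i) => ({
    batchIndex: i,
    size: Math.min(batchSize, totalRecords - i * batchSize),
    // One second apart: scheduling all batches is quick, but the actual
    // processing is spread over batchCount seconds.
    runAt: new Date(start.getTime() + i * 1000),
  }));
}

// Example: 15,000 records -> 122 batches of up to 123 records each,
// spread over roughly two minutes of wall-clock time.
const plan = planThrottledBatches(15_000);
console.log(plan.length, "batches, last one at", plan[plan.length - 1].runAt.toISOString());
```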