Are there any plans to improve automatic balancing of available capacity?
Some of my workflows crash because of “Workflow error - Operation timed out – app too busy” (whatever this means).
I spend a lot of time (literally days!!!) trying to balance how things work, so that I can get some sort of result.
I personally still do not understand this concept of “capacity” and “additional units”. Whatever it is, “reserved capacity” is not being utilised, and my app crashes at below 25% usage (why would I pay more, if I still have 75% “capacity” left?). Please see what Bubble’s own graph suggests.
So, what is the point of “reserved” and even what is “capacity”?
How can I scale if my workflows crash, and I don’t even get a notification?!
Time spent playing around with these “capacity / performance” issues wipes out the value…
Your issue may not be a capacity one. According to the error message, this seems to be a timeout issue. The operation may be taking too long to process. Can you explain more about what you are trying to do (a big import?)…
My issue is that I loop a function and it crashes.
Say, I want to delete 50k records, for example…
If we need to understand why it crashes, I may as well become a code writer and go build my own infrastructure.
I am sure we all know that Bubble’s infrastructure has pre-set time-outs… Why do I need to guess them and space my loops accordingly, and then still expect the function to suddenly crash?…
The loop takes too long to process. This is not a capacity issue and is not related to the graph you show. Did you try sending your request to an API Workflow instead?
But like I told you, in your case this is not a capacity error but a timeout error. It has nothing to do with capacity and load balancing. The process just takes too much time to complete.
If you send the same request using Schedule API workflow on a list, this may fix that.
There are a lot of topics about that. Cases are always different, so there are a lot of possible answers. But in your case, I think you should look at this one: Server Logs: Workflow error- Operation timed out -- app too busy
and maybe this one too: PP - Make changes to a list of things... stop after 35 raws
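For what it’s worth, the batching idea behind “Schedule API Workflow on a list” can also be sketched from outside Bubble. Below is a minimal sketch, assuming the Data API is enabled for the data type in question; the app URL, type name, token, batch size and pause are all placeholders I made up, not anything Bubble prescribes:

```python
# Sketch only: delete records of one Bubble data type in small, paced batches
# via the Data API, instead of one huge loop that can hit the workflow timeout.
# BASE_URL, TYPE_NAME and the token are placeholders; the Data API must be
# enabled for this type in the app's settings.
import time
import requests

BASE_URL = "https://yourapp.bubbleapps.io/api/1.1/obj"   # placeholder app URL
TYPE_NAME = "record"                                      # placeholder data type
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}      # placeholder token

BATCH_SIZE = 100           # how many things to fetch/delete per batch
PAUSE_BETWEEN_BATCHES = 5  # seconds to wait, to keep capacity usage low

def delete_all():
    while True:
        # Fetch the next batch of things (always from the start, since we delete as we go).
        resp = requests.get(
            f"{BASE_URL}/{TYPE_NAME}",
            headers=HEADERS,
            params={"limit": BATCH_SIZE},
            timeout=30,
        )
        resp.raise_for_status()
        results = resp.json()["response"]["results"]
        if not results:
            break  # nothing left to delete

        for thing in results:
            requests.delete(
                f"{BASE_URL}/{TYPE_NAME}/{thing['_id']}",
                headers=HEADERS,
                timeout=30,
            ).raise_for_status()

        # Pause so the work is spread out instead of spiking capacity.
        time.sleep(PAUSE_BETWEEN_BATCHES)

if __name__ == "__main__":
    delete_all()
```

The point is only that deleting 50k things as many small, paced batches keeps each piece well under the workflow timeout, which is what the scheduled-workflow approach does inside Bubble.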
More capacity should result in lower processing time. Timeouts should not be the user’s problem; timeouts are Bubble’s lack of auto-balancing. Why? Because if we use “Schedule API workflow on a list” to delete 50k records, Bubble will crash. And if I delete 1 record per minute, Bubble will either take ages and then crash, or just crash anyway.
The point is that I PERSONALLY see very limited scalability. Processing 100-1,000 records, maybe… But if one has more than that, with some linking logic to it, there is a good chance things will crash / time out…
Bubble can either keep quiet on this and let us find out when we try to scale, or magically fix the issue.
If you have already sent it to a Schedule API workflow and you face the same issue, I suggest sending a support ticket. I don’t think it’s normal in your case, because you don’t reach max capacity.
And no, a timeout is not always a lack of auto-balancing.
I don’t reach maximum capacity on that ducking graph because I spent days balancing so that I stay below 30% and can get a few hours out of my function. That is why.
This is a complex thing. I think that backend stuff should not have this kind of issue at all.
But in page workflows this is normal, and most apps will send large requests to the backend because of that.
Yes and this is the way to go.
But I agree with you that Bubble should do a better job of managing large requests in the backend.
But I think that users need to take a step too. It’s hard for Bubble to know whether a request needs to be processed right now or can be balanced/throttled, for example. Apps are so different. Maybe just add an option to this kind of workflow: “Don’t allow this workflow to use more than 25% of max capacity”, and balance the request so it still completes. What do you think?
I pay $125 per month. My average use is below 10%.
I do not care about my average use and my “capacity”.
Users press a button and it should take X time to complete their function/request. If they pay more, it should take less time to process their function/request. How it is done, we do not need to know. I agree with a “Don’t allow this workflow to use more than 25% of max capacity” option, but again, everything can be balanced: queue functions/requests, assign priority indexes to them, whatever… If we have to start thinking about this ourselves, again, I may as well build my own infrastructure.
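Just to illustrate what I mean by queuing and priority indexes, here is a tiny sketch of the kind of thing Bubble could run on its side. None of this exists in Bubble today; the capacity share, job cost and job names are made-up values for the illustration:

```python
# Illustration only: a tiny priority queue plus throttle of the kind Bubble
# could run server-side. Jobs carry a priority index; the dispatcher spaces
# them out so the simulated "capacity" usage stays under a configurable share.
import heapq
import time
import itertools

CAPACITY_SHARE = 0.25      # max share of capacity this queue may use (made up)
UNIT_COST_SECONDS = 1.0    # pretend each job consumes one "capacity-second"

_counter = itertools.count()  # tie-breaker so equal priorities stay FIFO
_queue = []                   # heap of (priority, sequence, job_fn)

def enqueue(job_fn, priority):
    """Lower priority index = runs sooner."""
    heapq.heappush(_queue, (priority, next(_counter), job_fn))

def run_queue():
    while _queue:
        priority, _, job_fn = heapq.heappop(_queue)
        job_fn()
        # Spread the work: if a job costs 1 capacity-second and we may only use
        # 25% of capacity, idle for the remaining 75% of each interval.
        time.sleep(UNIT_COST_SECONDS * (1 - CAPACITY_SHARE) / CAPACITY_SHARE)

# Example usage with made-up jobs:
enqueue(lambda: print("delete batch of old records"), priority=5)
enqueue(lambda: print("user-facing request"), priority=1)
run_queue()  # the user-facing request runs first; the background batch is throttled
```

User-facing requests get a low priority index and run first; heavy background batches get throttled so they never eat more than their share of capacity.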
You’re absolutely right about timeouts. Bubble will make the error available so that we can handle it. Also, the roadmap says ‘No timeouts for long operations’. In the meantime, knowing this weakness, we have to analyse any loop that takes more than 5 minutes and cut it into several pieces. This should not be necessary when less than 10% of capacity is used throughout the month. It will take another algorithm on the Bubble side to assess this condition, or simply raising the timeout to 15 or 30 minutes for those who pay the price, imo.
If you look on a smaller scale, you’ll see the timeout, but I guess you already know that, and it is not the point of this discussion.
I appreciate the focus on “overall performance” (whatever this means), however the two are very closely related. Bubble claims to be “for data-driven apps”, does it not?
“Overall performance”, I guess, could be measured by how quickly repeating groups are displayed? If so, the process of loading data for these groups is very similar to “pushing” data into a function for processing. Whether data is being “pulled” or “pushed”, processing, I assume, somewhat depends on a user’s “capacity”? Hence, balancing should be the key focus.
If users start coming up with clever ways to balance load manually, Bubble will lose money on those who find a way to keep their “data-driven” apps at 95% capacity. Those who don’t find a way will soon realise that the simplicity of the interface is wiped out by the lack of reliability.
I have analysed my very simple loop and adjusted it to run below 30%, but still it crashed every few hours… So, if my loop was simple (less than 3-5 minutes), we have a situation where either Bubble crashes it (like a security thing, so no one drains their AWS account) or something else (another workflow) caused the overall timeout… My simple workflow didn’t crash when I had intensive daily flows (those ran successfully too), so I have a feeling that Bubble crashes long flows for whatever reason. Maybe there is a limit on the queue, but it certainly feels like there is a simple reason why things time out. The important thing, I believe, is that this whole “units” story feels artificial. The real processing capacity is the same; it is just given out differently to different users. Hence, if this “giving out” process crashes, my flows crash.