Stop backend API workflows from timing out

Hello, my backend API workflows are very important to my application, and a single timeout causes a lot of issues. Is there any way I can increase the timeout from 60 seconds to something greater to ensure my workflows run as intended? They call one another, so if one scheduled workflow times out, everything that was scheduled after it never runs.

This is because I am using them to run workflows recursively: since Bubble can't make many database changes at once, I do them in batches of 25 or 50. A backend workflow will, for example, make a change to 50 items, then schedule itself to fire again and change another 50 items. But it's way too unreliable. Timeouts mean it almost never completes a full list of items, even if it's only doing 300 or so database changes, since that would require 6 workflows in a row to all run without timing out, which rarely happens.
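To make that concrete in ordinary code terms, the pattern is roughly the following (a minimal Python sketch; `change_item` and `schedule_api_workflow` are hypothetical stand-ins for Bubble's database change and Schedule API Workflow actions, not real Bubble APIs):

```python
# Minimal sketch of the recursive batch pattern described above; the two
# helpers below are hypothetical stand-ins, not real Bubble APIs.

BATCH_SIZE = 50

def change_item(item_id):
    """Stand-in for one database change on one item."""
    print(f"changed {item_id}")

def schedule_api_workflow(fn, **kwargs):
    """Stand-in for Bubble's Schedule API Workflow action: enqueue a
    later run instead of calling it inline."""
    fn(**kwargs)  # in Bubble this would be queued, not called directly

def process_batch(remaining_ids):
    """One backend workflow run: change up to BATCH_SIZE items, then
    schedule itself again for whatever is left."""
    batch, rest = remaining_ids[:BATCH_SIZE], remaining_ids[BATCH_SIZE:]
    for item_id in batch:
        change_item(item_id)
    if rest:
        # If this run times out before this line executes, the next run
        # is never scheduled and the whole chain stalls -- exactly the
        # failure mode described above.
        schedule_api_workflow(process_batch, remaining_ids=rest)

process_batch([f"item-{n}" for n in range(300)])  # 300 changes = 6 chained runs
```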

Hi there @Drahgoone,

I would recommend running recursive workflows in this case.


How do you mean?

Are you scheduling WFs from a list?

If so, I recommend checking out this article by @petter

I am essentially using both of these methods. Basically the workflow is like this:

Change one entry on 25 items → schedule workflow on a list for those 25 items → schedule the workflow to run again if it hasn't finished the total number of items it was asked to process.

So if I want to change 500 things, it works through them in batches of 25 (or sometimes 50), and normally it does fine. But if site traffic is heavier than usual, a timeout hits somewhere down the line, so only 300 of the 500 get finished; once a workflow times out, it can't schedule itself to keep going, so the whole chain freezes. I used to do it fully recursively, working on 1 item at a time (with no run on a list) rather than 25, but that was extremely slow and just as unreliable.
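In sketch form, the hybrid approach looks something like this (Python again, with hypothetical stand-ins for Schedule API Workflow on a List and the completion check; none of these names are real Bubble APIs):

```python
# Sketch of the hybrid pattern: dispatch one batch as a "run on a list",
# then requeue the controller if the full request isn't finished yet.
# Every helper here is a hypothetical stand-in, not a real Bubble API.

BATCH_SIZE = 25

def change_item(item_id):
    """Stand-in for one database change on one item."""
    print(f"changed {item_id}")

def schedule_on_list(fn, items):
    """Stand-in for Schedule API Workflow on a List: one queued run per item."""
    for item in items:
        fn(item)

def controller(all_ids, done=0):
    """One controller run: fire a batch on a list, then reschedule itself
    until the whole request is covered."""
    batch = all_ids[done:done + BATCH_SIZE]
    schedule_on_list(change_item, batch)   # step 1: change a batch of 25
    done += len(batch)
    if done < len(all_ids):
        # step 2: requeue; a timeout before this line orphans the rest,
        # which is the freeze described above
        controller(all_ids, done)          # queued in Bubble, called inline here

controller([f"item-{n}" for n in range(500)])  # 500 items = 20 controller runs
```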

No. You can’t change the timeout duration, per an email from Bubble support to me. I run a long randomization function that can take 20 minutes on a super-fast Google server; I had to move it there when Bubble couldn’t do it.


In our bulk-processing development we have found the only reliable way to make more than a few hundred database operations is to slow-drip recursive workflows during off-peak load.

We are currently investigating and benchmarking our latest development in performant bulk Bubble database operations, which we are calling “throttled concurrency”. To date we have been able to robustly dispatch dozens of throttled recursive workflows in parallel and have operations on tens of thousands of records run reliably. The basic idea is that we set up many recursive workflows, each throttled to one second between iterations, and run them in parallel, relying on natural server latency to introduce some jitter into the timing.
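As a rough sketch of the shape of the technique (an illustration, not our actual Bubble implementation), with Python's asyncio standing in for Bubble's scheduler and a random delay standing in for natural server jitter:

```python
# Rough shape of "throttled concurrency": many recursive workers running
# in parallel, each pausing about a second between iterations. asyncio
# and the random jitter stand in for Bubble's scheduler and natural
# server latency; this is an illustration, not our Bubble implementation.
import asyncio
import random

BATCH_SIZE = 25
WORKERS = 24            # dozens of throttled recursive workflows

async def recursive_worker(items):
    """One throttled recursive chain: process a batch, wait ~1 second,
    then continue with the remainder."""
    while items:
        batch, items = items[:BATCH_SIZE], items[BATCH_SIZE:]
        for item in batch:
            pass        # one database operation per item would go here
        # 1-second throttle plus jitter so the workers drift out of phase
        await asyncio.sleep(1.0 + random.uniform(0.0, 0.2))

async def main(all_items):
    # Shard the full list across the workers, one recursive chain each.
    shards = [all_items[i::WORKERS] for i in range(WORKERS)]
    await asyncio.gather(*(recursive_worker(s) for s in shards))

asyncio.run(main(list(range(6_000))))  # scaled-down demo of the idea
```

The sharding keeps each individual recursive chain short, and the jitter keeps the workers from hitting the database in lockstep.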

How far can we push this technique? We do not know yet. We are exploring how far it can scale on plan tiers from personal to dedicated. Being able to cut the throttled processing time down by a factor of 20, or possibly even 100, can make a huge difference. What we do know is that we can have tens of thousands of workflows scheduled in the queue without impacting capacity, because the queue spaces the workflow executions out over time.

When tackling these problems it really helps to carefully differentiate between telling the server to plan to do something (cheap) versus telling the server to directly do something (expensive).
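A toy illustration of that distinction (the helpers and the queue here are made up to frame the intuition, not Bubble internals):

```python
# Toy illustration of "plan" (cheap) versus "do" (expensive); the helpers
# and the queue are made up to frame the intuition, not Bubble internals.
import collections

queue = collections.deque()

def schedule(task):
    """Telling the server to plan to do something: an O(1) enqueue."""
    queue.append(task)

def execute(task):
    """Telling the server to directly do something: the costly part."""
    pass  # real database writes would happen here

for n in range(10_000):       # planning 10,000 operations is nearly free...
    schedule(n)

while queue:                  # ...the cost is only paid as the queue drains,
    execute(queue.popleft())  # spaced out over time
```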

Back Story

An old climbing partner of mine used to work for Westgrid on optimizing job scheduling in high-performance compute environments. Our conversations about parallel process design, nearly two decades ago, still inform a great deal of how I reason through high-volume problems.

