Running multiple backend workflows

Does anyone have information on the scalability of backend workflows?

So say I need to do … 5000 API calls “overnight”. What is the best way to schedule them? How many can run in parallel before we hit trouble?

e.g. 5000 calls at one every 5 seconds (single-threaded) is c. 7 hours.
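As a quick sanity check on the arithmetic (plain Python, nothing Bubble-specific):

```python
# Back-of-the-envelope timing for a single recursive stream:
# calls spaced interval_s seconds apart, processed one at a time.
def hours_to_finish(calls_per_stream, interval_s):
    return calls_per_stream * interval_s / 3600

print(hours_to_finish(5000, 5))  # one stream of 5000 -> about 6.94 hours

# Three parallel streams of 5000 each still take ~6.94 hours of
# wall-clock time, but process 15,000 calls in total.
```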

So what if I had 3 sets of 5000 running every 5 seconds?


Don’t know how helpful this is, but I created 50k records in around 6 hours from a recursive workflow gone awry.

The workflow had 5 action steps in it, including a couple of searches.



It works faster if you build out a recursive workflow.
It goes crazy when you schedule huge lists using the standard method, even if it has simple steps.


I’ve had issues when running backend workflows recursively overnight. What almost always happens to me is that there ends up being some kind of ‘slow down’. I’ve experienced it on several apps and with several different types of recursive workflows.

Usually when I have the app open, the workflows move ahead at the speed set by the interval, so at 5 seconds I’d get around 12 records every minute. But once I shut down my device, this drops considerably, and what should have been finished by the time I woke up was not.

I’ve posted about it in the past with no responses from Bubble.

Can’t find the other post, but there was a second if I recall correctly.


This has been an issue I ran into before. I’ve been meaning to write to Bubble to see what their “official” way of doing recursive workflows was. I’ll keep you all updated on that.

Thanks, yes that has been my experience too.

My worry with a recursive wf is that you have no way to monitor if it is “working”. Or at least not that I can think of.

So if it fails to schedule itself, how do you restart?

Maybe a periodic check to see if a) there are things to process and b) the list to process is going down.

OK, so a bit of an update on Recursive workflows.

The key here seems to be the plan you are on, not in terms of “CPU” but … a limit on the number of workflows.

A Pro plan will happily chug along at 2 recursive workflows per second on 5 parallel streams for hours on end. So we can process 50,000 rows (simple updates) in around 6 hours. This was around 60% “CPU”.

A Hobby plan is not that much slower, but it would unexpectedly halt. Not sure if this is Bubble saying “Too many workflows” or if it is a bug.

@allenyang it would be good to know if there is a limit on the number of workflows an app can run in a given period, based on plan. @philledille has a bug outstanding with you on this.


There shouldn’t be a limit on the number of workflows that can run in a given period; rather, it’s based on capacity, and Hobby plans definitely have much less capacity than Production plans.

I’d love to hear more from others on their experience with recursive workflows, particularly as it can be such a benefit for certain types of apps. In fact, we built an app that is completely dependent on recursive WFs, so related issues (potential bugs, performance efficiencies) have been ongoing ‘concerns’ for us. There also seem to be various ways to approach this, so I’d love to learn more about what works best for others.

Nigel, you mentioned a worry about how to monitor whether a recursive WF is working. If it helps, we’ve done a few things to help alleviate our worries:

⦁ Add a ‘Failsafe’ step at the end of all recursive workflows that fires ‘only when’ the recursive Schedule API step is empty (i.e., when the recursive WF didn’t reschedule itself). The failsafe sends an email to us with details on the project error.

⦁ Add a ‘count’ field in the relevant data type that increases or decreases by 1 every time the recursive WF fires. For example, we send emails to project participants on a regular basis, so we have an API workflow first count the number of participants (partCount=100) and then another recursive WF add 1 to the ‘partEmailsCount’ field for that project whenever an email is sent. We can then check whether partCount=partEmailsCount.

⦁ Create a SuperUser dashboard with a RG that lists any project errors that would have occurred due to a failed recursive WF. For example, projects where partCount doesn’t equal partEmailsCount. We then have a button that allows us to restart that recursive WF for that project (and passes partEmailsCount as a parameter so that we can start where we left off).

⦁ Before a recursive WF launches, we schedule an ‘error check’ API a few hours later that emails us with a list of all project-critical information (e.g., whether there are any null fields that should contain important data). We did this because an early version of the app had a bug that passed a blank parameter forward in the recursive WF that was required to create records successfully.

⦁ Increase the interval before the recursive WF schedules itself (e.g., current date + 10 seconds), to ensure the whole WF processes before it recurs. In other words, ensure that WFs don’t ‘pile up’ and cause a timeout or other problem that would end the recursive WF.
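The counting and failsafe ideas above can be sketched in plain Python (Bubble workflows aren’t code, so every name here — send_email, part_count, error_check — is illustrative, not a Bubble API):

```python
def send_email(recipient):
    pass  # stand-in for the workflow's real action step

def recursive_send(participants, state):
    """One 'iteration' of the recursive WF: act, count, reschedule on the rest."""
    if not participants:
        return
    send_email(participants[0])
    state["part_emails_count"] += 1          # the per-iteration count field
    recursive_send(participants[1:], state)  # the WF scheduling itself

def error_check(state):
    """The scheduled 'error check' step: did the counts reconcile?"""
    return state["part_count"] == state["part_emails_count"]

state = {"part_count": 3, "part_emails_count": 0}
recursive_send(["a@x.com", "b@x.com", "c@x.com"], state)
print(error_check(state))  # True when every participant got an email
```

If the recursion dies partway through, the counts stop matching, which is exactly what the SuperUser dashboard check above would surface.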

This may be overkill but it helps a worrier like me sleep a little better. :slight_smile:

Speaking of which, we recently landed a client who wants to use the app for 1,000-1,500 employees (launching in June), including bulk uploading participants, creating monthly surveys for them, and sending a set of survey invitations and reminders every few weeks - all using recursive WFs (the app provides a way to get a regular ‘pulse’ on employee performance, stress, and engagement). So, any war stories or advice from others in a similar situation would be appreciated! :slight_smile:


There is a helpful option that @vladlarin suggested.
You can create a table called Log, then create the fields for this new table; the fields depend on your logic. The idea is to create a new Log for each iteration step. You can then monitor the progress of a recursion via the DB.

I updated that flow a bit.
You can create a field called LogMain of type Log.
Before starting a recursion, you create a new Log. That Log handles the whole recursive run.
For each iteration, you create a new child Log and populate its LogMain field with the Log you created before starting the recursion.

So, now you can control the process: pause/resume, cancel, reschedule (by adding additional fields, of course).
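As a rough illustration (plain Python, not Bubble; all names are hypothetical), the Log/LogMain pattern amounts to one parent record per recursion and one child record per iteration, each child pointing back at its parent:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Log:
    name: str
    status: str = "running"           # e.g. running / paused / done / failed
    log_main: Optional["Log"] = None  # parent Log; None on the parent itself

logs: List[Log] = []                  # stand-in for the Log table in the DB

def start_recursion(name: str) -> Log:
    parent = Log(name=name)           # created before the recursion starts
    logs.append(parent)
    return parent

def record_iteration(parent: Log, step: int) -> Log:
    child = Log(name=f"{parent.name}#{step}", log_main=parent)
    logs.append(child)
    return child

def progress(parent: Log) -> int:
    """How many iterations of this recursion have been logged so far?"""
    return sum(1 for entry in logs if entry.log_main is parent)

run = start_recursion("nightly-sync")
for step in range(5):
    record_iteration(run, step)
print(progress(run))  # 5 iterations logged
```

Pausing or cancelling then becomes a matter of flipping the parent’s status field and having each iteration check it before rescheduling.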

That is exactly what we did.

We now have 5 different “log” tables that do all the controlling of the multi-stage process.

All good so far :+1:


Actually we now have 6 “log” tables, as this is a 9-step process with 4 API calls.

Essentially a Parabola rebuild in Bubble.

Thank you for sharing how you’re currently handling this. I was just brainstorming a “super user” or system admin type of dashboard for this very reason.


Absolutely, happy to help and share ideas!

It’s been quite an experience building and managing our own app, and such a mix of emotions (I have genuine empathy for programmers now that I’ve had similar first-hand experience!). It’s wonderful being able to test new ideas so easily and create something useful, but there’s also that constant worry of ‘breaking’ it or missing something important. We actually had a problem happen with our very first client, which prevented them from accessing the app at all. It was very embarrassing, and this was when we were relying on another Bubble expert, so we couldn’t fix it until they were available. On the bright side, it motivated us to learn Bubble ourselves and we haven’t looked back.

I’ve also been coming to terms with the fact that there’s always something new to learn about Bubble, and that there aren’t clear best practices (maybe there should be) other than those you pick up by reading through the forums. For example, a singular ‘step-by-step guide to creating a resilient recursive backend workflow’, including how to use expressions like ‘only when list minus: list item#1: count > 0’ to tell Bubble when to repeat the WF. I couldn’t believe it when I found that post; we were doing it a different way previously, so we had to rebuild that process! :slight_smile:
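For what it’s worth, that ‘only when list minus item #1: count > 0’ guard maps onto a familiar recursion pattern. A minimal Python sketch (illustrative only, since Bubble workflows aren’t code):

```python
def process_list(items, processed):
    """Act on the first item, then reschedule on the remainder."""
    processed.append(items[0])   # act on 'item #1'
    remainder = items[1:]        # 'this list minus item #1'
    if len(remainder) > 0:       # 'only when ... count > 0'
        process_list(remainder, processed)

out = []
process_list([1, 2, 3, 4], out)
print(out)  # [1, 2, 3, 4]
```

The point of the guard is termination: the WF only reschedules itself while the shrinking list is non-empty, so the recursion ends cleanly instead of firing once more on an empty list.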

In fact, it’s the same with the helpful advice about API logs that was picked up from another Bubbler, and that @NigelG has been using. Makes me wonder: why isn’t this a documented best practice?

My best advice to anyone is to read read read the forums and make sure to do extensive testing before rolling out any new features. And, of course, to build in those failsafes we talked about, so that you’ll know when something breaks and how to fix it. Also, read The Ultimate Guide to Bubble Performance - 156 pages of optimizing tips, which is a best practice guide written by @petter. Btw, @petter , let me know if you’re ever offering an app audit service, as we’d love to have an expert review and optimize our app :slight_smile:


100% agreed. This feels like quite a big area, and somewhat underserved by Bubble at the moment in terms of “best practice”, so time to crowd-source our own.

What I would also mention is that, despite some serious misgivings when we started the “multiple recursive workflow” build … Bubble handles it really, really well.

As long as you single-thread anything that updates a single record, the workflows are consistent.