[Feature Enhancement] Improved Performance and Reliability During Background Processing

Hi everyone,

I’m Steve, a product manager here at Bubble for our Scale team. I’m excited to share big improvements to the way Bubble manages the execution of scheduled API workflows on our new pricing plans. We’ve made changes to how server capacity is allocated that help protect your app’s user experience from being impacted during scheduled backend workflows. These changes also greatly improve overall reliability and performance. Specifically, they:

  • Isolate server capacity for scheduled workflows and other background tasks so that they don’t degrade app performance for your end-users
  • Eliminate server capacity errors for workflows that are in progress
  • Improve performance when you run a large number of scheduled API workflows

Note: The benefits described in this post are only available on our new workload-based pricing plans. If your app is still running on a legacy plan, you can upgrade at any time.

Server capacity isolation for background and foreground workflows
Users create scheduled API workflows to complete important administrative tasks that support their apps — for example, importing large amounts of data from an external source, recalculating values, cleaning up unnecessary entries, updating database schemas, and countless others. These bulk tasks typically leverage either Schedule API Workflow on a List (see recent forum post) or recursive API workflows to queue up and execute the jobs.
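Bubble workflows are built visually rather than in code, but for readers unfamiliar with the recursive pattern, here is a minimal Python sketch of what a recursive API workflow does conceptually. The function names and list-passing are illustrative stand-ins, not Bubble APIs:

```python
from typing import List

def recursive_workflow(items: List[str], processed: List[str]) -> None:
    """Illustrative stand-in for a recursive Bubble backend workflow:
    process the first item, then schedule the same workflow again on
    the remaining items until the list is empty."""
    if not items:                  # terminal condition: nothing left to process
        return
    head, tail = items[0], items[1:]
    processed.append(head)         # placeholder for the real per-item actions
    # In Bubble, this step would be a "Schedule API Workflow" action pointing
    # at the same workflow, passing the remaining list as a parameter.
    recursive_workflow(tail, processed)
```

Each "call" handles one item and re-queues itself, which is why a long list turns into a long chain of scheduled executions.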

Before this update, running these kinds of heavy backend processes could severely slow your app’s performance and even cause some client-side workflows to time out completely. Some users found ways to work around this by scheduling these tasks to run overnight when traffic is lower, inserting delays between the scheduled execution times to avoid overloading the app, or breaking the work into batches to schedule and run separately. All of these workarounds come with significant tradeoffs and add to the complexity of building and maintaining your app.
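As a rough illustration of the batching-with-delays workaround described above, this Python sketch splits a bulk job into batches and assigns each batch a staggered start offset. The numbers and function are hypothetical, not part of Bubble:

```python
def staggered_batches(num_items: int, batch_size: int, gap_seconds: int):
    """Split a bulk job into batches, giving each batch a start offset so
    batches run one after another instead of all at once.
    Returns (start_index, end_index, offset_seconds) tuples."""
    schedule = []
    for batch_number, start in enumerate(range(0, num_items, batch_size)):
        end = min(start + batch_size, num_items)
        schedule.append((start, end, batch_number * gap_seconds))
    return schedule

# 250 items in batches of 100, one minute apart:
# [(0, 100, 0), (100, 200, 60), (200, 250, 120)]
```

With capacity isolation, this kind of manual pacing is no longer necessary.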

Moving forward, these types of scheduled workflows will consume server capacity from an isolated “bucket” that’s entirely separate from the foreground workflows that power the user experience of your app. This allows you to schedule your bulk operations to run at a time and frequency that makes sense for your needs, without worrying about affecting your users.

Eliminating server capacity errors for workflows that are in progress
Some other effects of these changes are a bit more subtle, but the impact is better overall reliability, uptime, and performance.

Previously, when a single “bucket” of server capacity was shared for all tasks, Bubble would sometimes cancel actively running backend workflows in order to allow frontend workflows to complete. While this was an effective way to make sure that an app’s frontend would remain functional during heavy backend processing, it could result in a frustrating situation where some scheduled workflows were only partially completed. Plus, the overhead to enable this imperfect solution soaked up significant amounts of memory and computing power, which negatively impacted app performance.

Results will vary based on the complexity of the API workflow being scheduled, but the impact from reducing overhead and increasing reliability is a meaningful performance boost for scheduled API workflows. In a benchmark test using Schedule API Workflow on a List to schedule 40,000 simple API workflows, the execution time was reduced by about two-thirds, from 106 minutes to just 33 minutes, and capacity errors were completely eliminated.
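For the record, the "about two-thirds" figure follows directly from the benchmark numbers quoted above:

```python
# Benchmark figures from the post: 106 minutes before, 33 minutes after.
before_min, after_min = 106, 33
reduction = (before_min - after_min) / before_min   # fraction of runtime eliminated
speedup = before_min / after_min                    # relative speedup
# reduction is about 0.69 (roughly two-thirds), a ~3.2x speedup
```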

The bottom line
In addition to everything noted above, this change also significantly strengthens the underlying foundation of the Bubble platform. This is a key milestone in our journey to enhance the capabilities of Bubble to support apps at any scale. If you are still running on a legacy plan, make sure to upgrade to take advantage of these changes and the rest of our exciting scalability improvements on the roadmap for 2024.


Great work @steven.harrington


This is really big. Great update and kudos to the team.


Nice update!

Thanks for the improvements Bubble Team :slight_smile:

So it is limited to the current plans, as I thought it would be. All those legacy apps are gonna die harder than I thought.




Thanks for the early Christmas present! @steven.harrington & team


Do these updates extend to enterprise plans without workload unit restrictions?


I have the same q. I’m on a legacy dedicated plan.

Sounds like a great step forward! Thanks for the hard work!

This also explains all the posts about scheduled workflows suddenly cutting off or not running at all. Always wondered why and now we know that it involved something that was beyond our control.


This is really great news! It was so painful.


I thought that on the new workload-unit-based plans, applications were not subject to capacity limits anyway?


Thanks for the update, Steve – sounds great.

A question (as a scaling production user still on the legacy plans): on the new pricing plans, capacity as a metric is retired, replaced instead with WUs.

Given standard pricing plans use the main (shared) cluster rather than a dedicated server per app, I’m a little confused how, prior to this change, running a lot of scheduled BWFs could cause user / front end performance issues?

The work is being done on the main cluster, and (I’m told by the Bubble sales team) capacity should no longer be an issue assuming you’re not at extreme amounts of compute / work.

What am I misunderstanding?


Nice update!


Did you find out an answer to this?

No, I didn’t. And given the replies from others, it doesn’t seem like many others have this question. I was wondering if I misunderstood how it works.

I’d also like to know the answer to this.

@steven.harrington please update us on this confusion. Many of us are not clear on what exactly is changing here and why capacity is even a discussion point in the new WU-based scheme.


Thanks for the thoughtful questions! Here are some answers that we hope will help.

Capacity on legacy vs workload-based plans
In my original post, I used the word capacity to talk about our internal server capacity. I want to clarify how this is different from the pricing metric of “capacity” in the legacy plans:

  • In legacy plans, when you hit the capacity limit for your plan (based on throughput in a given period), your app’s performance would be throttled until capacity recovered.
  • In workload-based plans, capacity limits are not part of the pricing model. This increased flexibility enables the improvements described in the original post. However, like all SaaS platforms, there are still constraints in place to protect infrastructure and ensure reliability.

Applicability for different plans
For the reasons described above, this update only applies to apps running on any of the workload-based pricing plans. If your app is still on a legacy plan, you’ll need to upgrade to unlock these reliability and performance improvements.

This update is also not applicable to apps running on an Enterprise plan with a dedicated instance. Dedicated instances are an inherently different environment from the main cluster, so the logic for handling server capacity allocation is different as well.

If we haven’t addressed your specific question, please reach out to our Support team via the AI-powered chatbot on our Support center. Your unique use case may require a deeper look. As always, we hope you’ll keep adding feature requests to our Ideaboard and submit any bugs via our bug report form. Thank you for helping us make Bubble better!


Thanks @steven.harrington