Hi everyone,
I’m Steve, a product manager here at Bubble for our Scale team. I’m excited to share big improvements to the way Bubble manages the execution of scheduled API workflows on our new pricing plans. We’ve made changes to how server capacity is allocated that help protect your app’s user experience from being impacted during scheduled backend workflows. These changes also greatly improve overall reliability and performance. Specifically, they:
- Isolate server capacity for scheduled workflows and other background tasks so that they don’t degrade app performance for your end-users
- Eliminate server capacity errors for workflows that are in progress
- Improve performance when you run a large number of scheduled API workflows
Note: The benefits described in this post are only available on our new workload-based pricing plans. If your app is still running on a legacy plan, you can upgrade at any time.
Server capacity isolation for background and foreground workflows
Users create scheduled API workflows to complete important administrative tasks that support their apps: importing large amounts of data from an external source, recalculating values, cleaning up unnecessary entries, updating database schemas, and many more. These bulk tasks are typically queued up and executed with either Schedule API Workflow on a List (see recent forum post) or recursive API workflows.
Before this update, running these kinds of heavy backend processes could severely slow your app’s performance and even cause some client-side workflows to time out completely. Some users worked around this by scheduling these tasks to run overnight when traffic is lower, inserting delays between the scheduled execution times to avoid overloading the app, or breaking the work into batches to schedule and run separately. All of these workarounds come with significant tradeoffs and add complexity to building and maintaining your app.
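To make that pattern concrete, here’s a rough sketch, written in TypeScript rather than Bubble’s visual editor, of the recursive, delay-batched approach described above. Every name in it (WorkItem, processBatch, runRecursively, the batch size and delay) is hypothetical; it only illustrates the general shape of a recursive workflow that re-schedules itself to work through a list in chunks.

```typescript
// Hypothetical sketch of the "recursive workflow with delays" pattern,
// not Bubble editor logic or a Bubble API: process one batch of items,
// then schedule the next run after a pause so the app isn't overloaded.

type WorkItem = { id: number };

const BATCH_SIZE = 100;               // items handled per run
const DELAY_BETWEEN_RUNS_MS = 5_000;  // artificial pause between runs

async function processBatch(items: WorkItem[]): Promise<void> {
  // Placeholder for the real work (imports, recalculations, cleanup, etc.)
  for (const item of items) {
    console.log(`processing item ${item.id}`);
  }
}

async function runRecursively(remaining: WorkItem[]): Promise<void> {
  if (remaining.length === 0) return; // base case: the list is done

  await processBatch(remaining.slice(0, BATCH_SIZE));

  // Re-schedule with the rest of the list, mimicking a recursive
  // API workflow that queues its own next run.
  setTimeout(() => {
    void runRecursively(remaining.slice(BATCH_SIZE));
  }, DELAY_BETWEEN_RUNS_MS);
}

// Example: 40,000 simple items, processed 100 at a time.
const allItems: WorkItem[] = Array.from({ length: 40_000 }, (_, i) => ({ id: i }));
void runRecursively(allItems);
```

With capacity isolation in place, the artificial delay between runs, and the overnight scheduling that often goes with it, is largely unnecessary.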
Moving forward, these types of scheduled workflows will consume server capacity from an isolated “bucket” that’s entirely separate from the capacity used by the foreground workflows powering your app’s user experience. This allows you to schedule your bulk operations to run at a time and frequency that makes sense for your needs, without worrying about affecting your users.
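As a loose analogy (and only an analogy; this is not Bubble’s actual infrastructure), capacity isolation works like two independent pools with their own concurrency limits: a burst of background jobs can only ever fill its own pool, so the slots reserved for foreground, user-facing work stay free. The sketch below illustrates the idea in TypeScript with made-up names.

```typescript
// Conceptual analogy for capacity isolation: two independent pools with
// separate concurrency limits, so a flood of background jobs can never
// consume the slots reserved for foreground, user-facing work.

class CapacityPool {
  private active = 0;
  private waiting: Array<() => void> = [];

  constructor(private readonly maxConcurrent: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await task();
    } finally {
      this.release();
    }
  }

  private acquire(): Promise<void> {
    if (this.active < this.maxConcurrent) {
      this.active++;
      return Promise.resolve();
    }
    // Wait until a slot in *this* pool frees up; the other pool is unaffected.
    return new Promise((resolve) =>
      this.waiting.push(() => {
        this.active++;
        resolve();
      })
    );
  }

  private release(): void {
    this.active--;
    const next = this.waiting.shift();
    if (next) next();
  }
}

// Separate "buckets": foreground requests and scheduled/background workflows
// draw from different pools, so neither can starve the other.
const foregroundPool = new CapacityPool(8);
const backgroundPool = new CapacityPool(4);

void foregroundPool.run(async () => console.log("handling a page workflow"));
void backgroundPool.run(async () => console.log("crunching a scheduled bulk job"));
```

The key property is that the two pools never share a queue or a limit, which is what lets bulk operations run at full speed without touching foreground responsiveness.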
Eliminating server capacity errors for workflows that are in progress
Some other effects of these changes are more subtle, but they add up to better overall reliability, uptime, and performance.
Previously, when a single “bucket” of server capacity was shared across all tasks, Bubble would sometimes cancel actively running backend workflows to allow frontend workflows to complete. While this was an effective way to keep an app’s frontend functional during heavy backend processing, it could leave some scheduled workflows only partially completed. On top of that, the overhead required to enable this imperfect solution consumed significant memory and computing power, which further hurt app performance.
Results will vary based on the complexity of the API workflow being scheduled, but the reduced overhead and improved reliability translate into a meaningful performance boost for scheduled API workflows. In a benchmark test using Schedule API Workflow on a List to schedule 40,000 simple API workflows, execution time dropped by about two-thirds, from 106 minutes to just 33 minutes, and capacity errors were eliminated entirely.
The bottom line
In addition to everything noted above, this change significantly strengthens the underlying foundation of the Bubble platform. It’s a key milestone in our journey to enhance Bubble’s capabilities to support apps at any scale. If you’re still running on a legacy plan, make sure to upgrade to take advantage of these changes and the rest of the exciting scalability improvements on our roadmap for 2024.