Need an alternative to a recursive workflow on a big list

Hey! When adding a new Item to my database, one of the things I have to do is calculate the difference between historical prices sequentially like so…

Price 1 - Price 4
Price 2 - Price 5
Price 3 - Price 6

The only way I’ve found to do this is with a recursive workflow, but it’s really WU-intensive, as there are upwards of 4,000 prices to go through each time.

Do you know of a more efficient way than a recursive workflow? The only thing I can think of is to dump it all into a spreadsheet first for the math and then upload the CSV, but that leaves a lot of room for human error.


How about doing it once at the end of the day? (i.e., scheduling your recursive workflow to run at the end of every day and do the mass comparison instead of doing it each time a new price is added). How does this sound?

Does this have to be recursive?

When you add a new item, you can store all the datapoints at the time of creation.

It doesn’t have to be recursive. I’m looking for an alternative to a recursive workflow. But each price is a separate entry and needs to be calculated in a specific order.


This is for the initial upload of about 15 years of daily prices from an API. From there, yes, I do an end-of-day calculation for each day’s price, which only deals with about 20 entries each day. It’s the initial upload of all historical prices that’s the issue.

Sorry, meant to ask if it must be recursive.

Any other details that can help us understand?

Hmm… hopefully this helps. lol

Each entry is a price, and the price n entries away needs to be subtracted from it to come up with the 1-week, 2-week, and 4-week changes.

Example:
478.80 - 483.60
478.80 - 459.69
478.80 - 451.09

Repeat for each of the 4,000 prices that are loaded into the database. And I need to do this for each new Thing these prices are for.
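
In rough Python terms, what each Thing needs is something like this (the offsets here are just placeholders matching the tiny example above; the real ones depend on how the dates line up):

```python
# prices sorted by date, newest first
prices = [478.80, 483.60, 459.69, 451.09]  # ...up to ~4,000 entries

# list offsets for the 1-week, 2-week, 4-week prices
# (placeholder values matching the example above)
offsets = {"1w": 1, "2w": 2, "4w": 3}

for i, price in enumerate(prices):
    for label, n in offsets.items():
        if i + n < len(prices):
            change = price - prices[i + n]  # e.g. 478.80 - 483.60
            # each change then has to be saved back to the right entry
```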

But the only way I know how to let the workflow know the correct order (which price is item #1 vs. item #7) is to send the prices as a list sorted by date. Here’s the backend workflow (BEWF) I currently have set up:

This is only for the initial upload of the Thing and its prices. It’s 15-20 years of daily prices, which is why it’s such a big list. After the initial upload, daily updates as new prices come in are not an issue.

I’m looking for an alternative way to process these initial calculations, since a recursive workflow is so WU-intensive, especially with this many records.

Hey @telaholcomb

I wrote a step-by-step to help you. I think it could be good for your problem.

Since processing 4,000+ prices in a recursive workflow is too WU-intensive, here’s a more efficient approach:

1. Use Bulk Processing with API Workflows (Instead of Recursion)

  • Go to Backend Workflows and create a new API workflow (e.g., "Batch Price Calculation").
  • Set it to process a batch of 200 records at a time instead of all at once.
  • Use “Schedule API Workflow on a List” to process these smaller chunks sequentially.
  • This reduces WU usage compared to fully recursive workflows (the chunking idea is sketched right after this list).
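
To make the batching concrete, here’s the chunking idea sketched in plain Python. This is purely illustrative; inside Bubble the equivalent is passing each chunk to “Schedule API Workflow on a List” (the workflow name below is made up):

```python
def chunks(items, size=200):
    """Yield consecutive batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

price_ids = list(range(4000))  # stand-in for the 4,000 price entries
for batch_number, batch in enumerate(chunks(price_ids), start=1):
    # in Bubble: schedule "Batch Price Calculation" on this batch
    print(f"batch {batch_number}: {len(batch)} records")
```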

2. Precompute & Store Price Differences

  • Instead of calculating differences in real-time, create a new field or datatype to store precomputed values.
  • When new data is uploaded, run the calculations once and save them for quick retrieval later (see the field sketch after this list).
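
As a sketch of what a stored record could look like once the differences are precomputed (the field names are assumptions, not your actual schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PriceEntry:
    """One daily price, with its changes stored next to it."""
    price_date: date
    price: float
    change_1w: Optional[float] = None  # computed once at upload
    change_2w: Optional[float] = None
    change_4w: Optional[float] = None
```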

3. Offload Heavy Calculations to External Tools (Most Efficient for Large Datasets)

If the dataset is too large for Bubble to handle efficiently:

  • Export data to Google Sheets, Airtable, or a database (e.g., PostgreSQL, MySQL).
  • Use a script (Python, Google Apps Script, or Make.com/Zapier) to process the price differences automatically (a Python sketch follows this list).
  • Re-import the processed data via Bubble’s CSV upload or API.
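
For example, a small pandas script can do the whole 4,000-row pass in one go. The column names, the newest-first sort, and the 7/14/28-row offsets are all assumptions to adjust to your data:

```python
import pandas as pd

# exported from Bubble: one row per daily price
df = pd.read_csv("prices.csv", parse_dates=["price_date"])
df = df.sort_values("price_date", ascending=False).reset_index(drop=True)

# with newest-first rows, the price n rows down is n days older
for label, n in {"change_1w": 7, "change_2w": 14, "change_4w": 28}.items():
    df[label] = df["price"] - df["price"].shift(-n)

df.to_csv("prices_with_changes.csv", index=False)  # re-import into Bubble
```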

4. Optimize Data Sorting

  • Ensure your prices are sorted by date before processing.
  • Store a unique Index ID for each price entry to reference previous records more efficiently (one way to assign it is sketched below).
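
Assigning that index is a one-liner once the rows are date-sorted; again, just a sketch with assumed column names:

```python
import pandas as pd

df = pd.read_csv("prices.csv", parse_dates=["price_date"])
df = df.sort_values("price_date", ascending=False).reset_index(drop=True)
df["index_id"] = df.index  # 0 = newest; entry i's 1-week pair is row i + 7
df.to_csv("prices_indexed.csv", index=False)
```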

Best Approach for You?

  • If Bubble-only: Use batch API workflows with precomputed values.
  • If performance is key: Offload calculations to external tools and sync results back into Bubble.