Best Way to Handle 100k+ Data in Bubble?

Hi everyone,

We’re building a financial tool using Bubble as the frontend and our own backend API to pull transaction data from QuickBooks.

Each company can have more than 100,000 transactions and we need to:

  • Display all of them in a repeating group
  • Allow inline editing for each field

We’re currently pulling data from our own FastAPI service and writing it into Bubble’s internal DB via the Data API so we can support easy editing and real-time updates. This also saves some workload units by letting Bubble handle all the data pulling and writing. Using an external DB could make the UI experience clunky — updates don’t reflect unless we refresh the page, which breaks the workflow.

The issue: Even uploading via Bubble’s Data API, writing 100k records costs around 400,000 workload units, which isn’t scalable.
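For context on what we've tried: Bubble's Data API does have a bulk create endpoint (`POST /api/1.1/obj/<type>/bulk`) that takes newline-delimited JSON, up to 1,000 rows per request, which at least cuts the number of HTTP calls. A rough sketch of how we batch records for it (the app URL, type name, and token below are placeholders, not our real values):

```python
import json
import urllib.request

BULK_LIMIT = 1000  # Bubble's documented cap per bulk request


def to_bulk_bodies(records, limit=BULK_LIMIT):
    """Yield newline-delimited JSON bodies sized for the bulk endpoint."""
    for i in range(0, len(records), limit):
        yield "\n".join(json.dumps(r) for r in records[i:i + limit])


def bulk_upload(records, app_url, type_name, api_token):
    # Placeholder URL/type/token; set these from your app's Data API settings.
    for body in to_bulk_bodies(records):
        req = urllib.request.Request(
            f"{app_url}/api/1.1/obj/{type_name}/bulk",
            data=body.encode("utf-8"),
            headers={
                "Content-Type": "text/plain",
                "Authorization": f"Bearer {api_token}",
            },
            method="POST",
        )
        urllib.request.urlopen(req)  # one request per batch of up to 1,000 rows
```

Fewer requests, but each created row still bills workload units, which is where the ~400k WU figure comes from.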

Just wondering — has anyone run into a similar situation?

  • Are there any best practices for handling large datasets in Bubble?
  • Has anyone successfully used an external DB + custom frontend logic to allow smooth edits?
  • Any creative strategies to reduce workload costs while keeping things responsive?

Would love to hear your thoughts or experiences!
Thanks in advance 🙏

I asked our support bot, this is what it offered:
Data Loading Strategies

  • Avoid loading all data at once - Instead of fetching 100k records on page load, implement pagination or load data as needed
  • Fetch only essential data initially - Load simple data types first, then fetch additional details when required
  • Keep sorting/filtering at database level - This is much more efficient than client-side manipulation
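To make the pagination point concrete: since your backend is FastAPI, the endpoint could return one page at a time plus a pointer to the next page, and the repeating group requests the next page as the user scrolls. A minimal sketch of the response shape (offset-based paging; the field names are just an example):

```python
def paginate(rows, offset=0, limit=100):
    """Return one page of rows plus the offset of the next page (None when done)."""
    page = rows[offset:offset + limit]
    next_offset = offset + limit if offset + limit < len(rows) else None
    return {"items": page, "next_offset": next_offset}
```

The frontend keeps calling with the returned `next_offset` until it is `None`, so no single request ever touches 100k rows.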

Backend Processing

  • Move heavy operations to API workflows - Use backend workflows for bulk operations to avoid frontend slowdowns
  • Use “Schedule API Workflow on a list” instead of “Make changes to a list” for large datasets - This prevents timeouts by processing items separately
  • Schedule expensive calculations behind-the-scenes - Run heavy queries in scheduled workflows and save results for later use

Workload Optimization

  • Batch API requests with delays to avoid server overloads
  • Optimize bulk processes - Pass lists as parameters instead of repeating searches, and minimize repeated actions
  • Consider recursive workflows for processing large lists efficiently
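The batching-with-delays idea can be sketched in a few lines; this is a generic throttling loop, not Bubble-specific code, and the batch size and delay are values you would tune:

```python
import time


def process_in_batches(items, handler, batch_size=500, delay_s=1.0):
    """Call handler on fixed-size chunks, pausing between calls to avoid overload."""
    count = 0
    for i in range(0, len(items), batch_size):
        handler(items[i:i + batch_size])
        count += 1
        if i + batch_size < len(items):
            time.sleep(delay_s)  # throttle so the receiving server isn't flooded
    return count
```

The same shape works for a recursive backend workflow: process one chunk, then schedule the next run with the remaining list.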

For inline editing of large datasets, focus on updating only changed records and use backend workflows to handle the processing efficiently.
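On the “update only changed records” point: a small diff before each save keeps the payload minimal, since Bubble's Data API supports partial updates via `PATCH /api/1.1/obj/<type>/<id>`. A sketch (field names here are hypothetical):

```python
def changed_fields(original, edited):
    """Return only the keys whose values differ; send this as the PATCH body."""
    return {k: v for k, v in edited.items() if original.get(k) != v}
```

If the user edits one cell in a 20-field transaction, only that one field goes over the wire and gets written.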

Edit:
But please, I hope other users share their own advice. Don’t take what the AI said as the be-all-end-all solution. Sometimes it misses the mark.

2 Likes

I was interested in the question about creative strategies to reduce WU costs and put together a video.

It uses my plugin Data Jedi, and I was able to fetch, modify, and save 51,200 items for less than 45 WUs, in under 5 minutes on the client device. A little bit of a wait, but it’s creative, I think.

5 Likes

I love Bubble but if you’re not using Bubble’s backend, just use WeWeb! It’s a better front-end.

Is there a particular reason you’ve decided to use Bubble for this task? It’s a great tool for so many things, but not everything of course.

1 Like

Thank you for the response! However, avoiding loading all the data at once, or loading it in parts, doesn’t fully address our issue. Ultimately, we still need to load the entire dataset, as we’re building an AI system that requires all historical data for training purposes.

Thanks, I’ll take a look at this plugin. This is a brand-new idea for managing data.

1 Like

One good way to get that data to the AI is using some of the features in the plugin. When doing the export, the plugin element exposes the file as a base64-encoded value, which is just a text string that can be sent via API to the AI for training on the data.
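To illustrate the base64 idea in plain Python (this is the general shape of such an export, not the plugin’s actual internals):

```python
import base64
import json


def rows_to_base64(rows):
    """Serialize rows to newline-delimited JSON, then base64-encode the text."""
    text = "\n".join(json.dumps(r) for r in rows)
    return base64.b64encode(text.encode("utf-8")).decode("ascii")


def base64_to_rows(blob):
    """Inverse: decode the base64 string and parse it back into dicts."""
    text = base64.b64decode(blob).decode("utf-8")
    return [json.loads(line) for line in text.splitlines() if line]
```

The encoded string can then be dropped into an API call body like any other text field.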

@georgecollier We’ve been using Bubble to build quick MVPs for the past two years—it was easy to get started and worked well when we had a small amount of data. That was the case until we hit scalability issues with our current project.

We’re now evaluating other tools as well, and interestingly, someone else also recommended WeWeb to us. It seems promising, especially with its ability to connect directly to external databases, which could help with scalability. My only concern is the learning curve of picking up a new tech stack.

Curious to hear your thoughts—how has your experience with WeWeb compared to Bubble? I’d really appreciate your insights from working with both.

Are you trying to label data for AI training?

Yes, kind of. We ended up migrating to Next.js with Supabase and rebuilt the app to make it more scalable and maintainable.

1 Like