Performance Q&A guide

This is a good piece of advice; I regularly do this myself with all database design (I was a DBA in a previous life), especially the denormalisation side of things. The main risk is keeping any related records consistent, which, depending on how you set up the data layer in Bubble apps, can be quite difficult and expensive (e.g. using triggers or updating many workflows and events)… and avoid many-to-many relationships between tables where you can, as these are VERY costly and will hit Bubble's limitations for comparing lists and arrays. Most database engines don't handle these many-to-many structures well, which is why we normalise! To achieve this in Bubble you usually end up with both an X->Y list and a Y->X list, which is messy and convoluted to keep in sync. Avoid at all costs!
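To make the trade-off concrete, here is a minimal Python sketch (all names are illustrative, not Bubble's actual data API) contrasting a normalised junction table with the denormalised "X->Y and Y->X" double-list approach the post warns about:

```python
# Normalised: a junction table holds one row per (student, course) pair,
# so each relationship is stored exactly once.
enrolments = [
    ("alice", "maths"),
    ("alice", "physics"),
    ("bob", "maths"),
]

def courses_for(student):
    # One scan (or indexed lookup in a real DB) instead of list comparisons.
    return [c for s, c in enrolments if s == student]

def students_for(course):
    return [s for s, c in enrolments if c == course]

# Denormalised "X->Y and Y->X" lists: every add/remove must update BOTH
# dictionaries, which is exactly where consistency bugs creep in.
students = {"alice": ["maths", "physics"], "bob": ["maths"]}
courses = {"maths": ["alice", "bob"], "physics": ["alice"]}
```

With the junction table there is a single source of truth; with the double lists, forgetting to update one side leaves the two views of the same relationship disagreeing.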

I do, however, sometimes find that certain operations can be much cheaper when done client-side, such as filtering an RG with checkboxes or inputs using filters. If these are applied via "Do a search for" constraints, Bubble requests new data from the server… whereas filters operate on data already cached in the browser. In my use cases this usually results in near-instant searches when structured well, even for relatively large datasets.

I think another key point is to not think only about query duration, but to consider latency as well - every request that needs new data has a round-trip time to the servers, during which the interface may be unusable or unpopulated. In this context, use the browser developer tools and behave "like a user" while watching the network tab. Aim for the fewest requests, each with as short a duration as possible. I like to see requests <250ms; anything above this might make me investigate further with a test page.
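The same "watch the timings, flag anything slow" habit can be sketched in a few lines of Python (the 250ms threshold and helper name are just taken from the rule of thumb above, not from any Bubble tooling):

```python
import time

SLOW_MS = 250  # rough threshold from the post: investigate anything slower

def timed(fn, *args):
    """Time one 'request' and report whether it exceeds the threshold."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms, elapsed_ms > SLOW_MS
```

For example, `timed(sum, range(1000))` returns the result, the elapsed milliseconds, and a flag you could log or alert on; in a browser the equivalent numbers come straight from the developer tools' network panel.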

The other challenge with data query performance is that the first request after a design change may have an inconsistent duration while the database cache is "warmed up". During this time, the database often runs the first query (or first few) with minimal optimisation, trying to "learn" how best to optimise it for subsequent queries (and to cache whatever data is commonly returned). Performance validation therefore needs to span several requests, usually as "new" sessions to avoid the local cache, then be averaged to get a feel for true performance. I also noticed in a recent Bubble update that performance on unique IDs has improved (my guess is index changes!), which I have very much noticed in some of my queries that link using these IDs - e.g. query times dropping from 125ms to <50ms in some cases for the same query.
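The discard-the-warm-up-then-average idea can be sketched as a tiny Python benchmark harness (the timings below are simulated to mirror the 125ms-cold / ~50ms-warm numbers mentioned above, not real measurements):

```python
import statistics

def benchmark(query, runs=5, discard=1):
    """Run a query several times, drop the cold warm-up run(s), and
    average the steady-state durations. 'query' returns (rows, duration_ms)."""
    durations = [query()[1] for _ in range(runs)]
    steady = durations[discard:]  # skip cold-cache runs
    return statistics.mean(steady), durations

# Simulated timings: first run cold (cache warm-up), then steady-state.
_timings = iter([125, 48, 50, 47, 49])

def fake_query():
    return [], next(_timings)
```

Averaging all five runs would overstate the typical cost (the 125ms outlier drags it up); dropping the warm-up run gives a figure much closer to what users actually experience.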


Do you think that Bubble runs on some pay-by-use DB like Aurora? I thought it ran on Postgres?
It should be quite simple and cheap to run on AWS, especially given that most Bubble users have very little data (I am guessing 99.9% have fewer than 1 million rows).

Regarding combined searches and WU: do combined searches combine the WU costs as well?