Side note: most of the complaints here (data-level performance, security) can be addressed by the Bubbler picking up SQL, learning how to model a conventional DB, standing up a custom remote DB node (which honestly is two clicks nowadays on DigitalOcean, Aiven, etc.), and using Bubble’s SQL connector for ALL data bindings (all CRUD, app-wide). This also has the benefit of getting the Bubbler used to how a native-code app would have to be built down the line, should the app become highly successful.
It’s not a golden-egg solution, though, as Bubble does not cache the DB connections, which means the handshake has to be re-established on every call (at a cost of about 400ms). But if you’re running complex queries that can take minutes in Bubble, I think you’ll find the benefit far outweighs the cost. There are also more involved workarounds, such as bundling CRUD into single DB in/out functions to minimize that handshake cost. Going this route takes about 2-2.5x longer to build a Bubble app, and you lose the convenient immutable naming Bubble provides with its internal DB, but the result is much more resilient performance-wise.
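To make the bundling idea concrete, here is a minimal TypeScript sketch using the `pg` client against an assumed Postgres backend. Everything named here (the `users` table, the `CrudBatch` shape, `DATABASE_URL`) is illustrative, not part of anyone’s actual setup; in Bubble itself the equivalent would be a single SQL connector call to a stored procedure that does the same batching.

```typescript
import { Client } from "pg";

// Hypothetical payload shape: one request carries every pending CRUD op.
interface CrudBatch {
  inserts: { name: string; email: string }[];
  updates: { id: number; name: string }[];
  deleteIds: number[];
}

// One connection handshake (~400ms through Bubble's SQL connector) now
// covers the whole batch instead of one handshake per operation.
async function runBatch(batch: CrudBatch): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect(); // the single handshake
  try {
    await client.query("BEGIN");
    for (const row of batch.inserts) {
      await client.query("INSERT INTO users (name, email) VALUES ($1, $2)", [
        row.name,
        row.email,
      ]);
    }
    for (const row of batch.updates) {
      await client.query("UPDATE users SET name = $1 WHERE id = $2", [
        row.name,
        row.id,
      ]);
    }
    if (batch.deleteIds.length > 0) {
      await client.query("DELETE FROM users WHERE id = ANY($1)", [
        batch.deleteIds,
      ]);
    }
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    await client.end();
  }
}
```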
On my largest Bubble app, where I am dealing with billions of records, I knew early on that Bubble would be a problem at that scale. I stood up large Elasticsearch farms, fronted by Azure Function apps.
I then interact with the backend using the API Connector in Bubble.
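For anyone curious what that kind of facade looks like, here is a rough TypeScript sketch using the Azure Functions v4 Node programming model and the official Elasticsearch client. The `records` index, the `title` field, and the env vars are placeholders I made up, not the actual setup described above.

```typescript
import { app, HttpRequest, HttpResponseInit } from "@azure/functions";
import { Client } from "@elastic/elasticsearch";

// Placeholder node URL; point this at your own cluster.
const es = new Client({ node: process.env.ES_NODE ?? "http://localhost:9200" });

// Thin HTTP facade over Elasticsearch: Bubble's API Connector calls this
// endpoint instead of talking to the cluster directly.
app.http("search", {
  methods: ["GET"],
  authLevel: "function",
  handler: async (req: HttpRequest): Promise<HttpResponseInit> => {
    const q = req.query.get("q") ?? "";
    const result = await es.search({
      index: "records",               // hypothetical index name
      size: 50,
      query: { match: { title: q } }, // hypothetical field name
    });
    // Return a flat JSON array so Bubble's response mapping stays cheap.
    return { jsonBody: result.hits.hits.map((h) => h._source) };
  },
});
```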
BUT: it has been painful over the years, because Bubble adds overhead to every API call as it maps the response to Bubble data structures and tries to do caching.
It has gotten a bit better performance-wise, but they don’t let me disable the caching and that overhead. I have asked many times; that would help.
I ran a test comparing a custom JS plugin I wrote (direct call and rendering in Bubble) against rendering the same data through the API Connector: 200ms vs 2 seconds. But of course you lose the great flexibility Bubble offers.
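For context, the direct-call path looks roughly like the sketch below inside a plugin element’s update function. The types are simplified stand-ins for illustration, not Bubble’s exact plugin API (for one, the real `instance.canvas` is a jQuery object).

```typescript
// Simplified stand-ins for the objects Bubble hands a plugin element.
interface PluginInstance {
  canvas: HTMLElement; // simplification: Bubble actually passes a jQuery object
}

// Direct fetch straight to the backend: no API Connector mapping and no
// Bubble-side caching layer, which is where the ~200ms vs ~2s gap comes from.
async function update(
  instance: PluginInstance,
  properties: { endpoint: string } // hypothetical property exposed by the plugin
): Promise<void> {
  const res = await fetch(properties.endpoint);
  const rows: { id: string; title: string }[] = await res.json();

  // Render straight into the element's canvas, skipping Bubble's data layer.
  // (A real plugin should escape these values before injecting HTML.)
  instance.canvas.innerHTML = rows
    .map((r) => `<div class="row">${r.title}</div>`)
    .join("");
}
```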
Same. I think it should be a priority, but at the end of the day my guess is most companies are still working with really complex, slow systems. I was used to it at my last job: the system was slow, but it got the job done. Performance should be a priority, but most users don’t notice that much of a difference, IMO.
I believe creating/deleting rows in a database in bulk (> 1,000 rows) needs big improvement. Even using recursive backend workflows, the problem isn’t resolved. In contrast, modifying fields inside already-existing rows (no matter how many) performs pretty well.
Perhaps, as @NigelG mentioned, it’s better to do the creation/deletion in batches and use an index in the process. But that still takes a lot of time indeed.
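One way to do the batching from outside the editor is Bubble’s Data API bulk endpoint, which accepts newline-delimited JSON. A hedged TypeScript sketch, with the app URL, type name, batch size, and token all placeholders:

```typescript
// Batched bulk-create against Bubble's Data API bulk endpoint
// (POST /api/1.1/obj/<type>/bulk with a text/plain, newline-delimited body).
const BULK_URL = "https://yourapp.bubbleapps.io/api/1.1/obj/item/bulk"; // placeholder
const TOKEN = process.env.BUBBLE_API_TOKEN ?? ""; // placeholder

async function bulkCreate(rows: object[], batchSize = 500): Promise<void> {
  for (let i = 0; i < rows.length; i += batchSize) {
    const batch = rows.slice(i, i + batchSize);
    const res = await fetch(BULK_URL, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        "Content-Type": "text/plain",
      },
      // One JSON object per line, per the bulk endpoint's format.
      body: batch.map((r) => JSON.stringify(r)).join("\n"),
    });
    if (!res.ok) {
      throw new Error(`Batch starting at row ${i} failed: ${res.status}`);
    }
  }
}
```

This still pays one HTTP round trip per batch, so it doesn’t make bulk writes fast, but it cuts the per-row workflow overhead considerably.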