Why do APIs on Bubble just "give up" when you need more capacity?

I don’t have a lot of experience with web development or databases, but I’ve been using Bubble for a while and am starting to use a lot of backend API workflows.

Does anyone else think it’s a little insane that when you delete a list of things, it can just give up after the first couple hundred items? Isn’t the whole idea of a computer that you tell it to do something and it does it? If Bubble is concerned about your capacity, why doesn’t it at least delay the request and spread it out? Instead it drops the whole task and doesn’t warn you in any way other than burying it in the server logs.

Instead I resorted to recursive workflows, and now the whole process runs really slowly while my app’s CPU usage doesn’t even hit 1%. But if I put multiple actions in the workflow before it schedules the API workflow again, capacity can actually spike too high again. Plus, capacity apparently depends on Bubble’s own server load at different times of day, so it’s not even reliable.

Shouldn’t the “delete a list” action, for example, just run its own internal recursive workflow at max speed, then automatically slow down (not completely give up) if CPU is needed somewhere else in the app? Then it could guarantee the task actually gets done without “giving up”.
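To make the idea concrete, here’s roughly the behaviour I mean, sketched as an external script against Bubble’s Data API (endpoint paths from memory; the app URL, API key, and a data type called “task” are all placeholders): delete one page of things at a time, and back off when capacity complains instead of dropping the whole job.

```typescript
// Sketch of "delete a list, but throttle instead of giving up",
// run from outside Bubble against the Data API.
// Assumptions: the Data API is enabled, the data type is "task",
// and APP_URL / BUBBLE_API_KEY point at your app.

const APP_URL = "https://yourapp.bubbleapps.io/version-test/api/1.1";
const API_KEY = process.env.BUBBLE_API_KEY!;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function deleteAllTasks(): Promise<void> {
  let delayMs = 200; // start fast, slow down only when Bubble pushes back

  while (true) {
    // Fetch the next page of ids (100 is the Data API's page limit).
    const list = await fetch(`${APP_URL}/obj/task?limit=100`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    }).then((r) => r.json());

    const items: { _id: string }[] = list.response?.results ?? [];
    if (items.length === 0) break; // nothing left: the task actually finished

    for (const item of items) {
      const res = await fetch(`${APP_URL}/obj/task/${item._id}`, {
        method: "DELETE",
        headers: { Authorization: `Bearer ${API_KEY}` },
      });

      if (res.status === 429 || res.status === 503) {
        // "App too busy" style response: slow down rather than abandoning the job.
        // The item stays in the database and gets picked up on the next pass.
        delayMs = Math.min(delayMs * 2, 10_000);
      } else {
        delayMs = Math.max(200, delayMs / 2); // recover speed when things are quiet
      }
      await sleep(delayMs);
    }
  }
}

deleteAllTasks().catch(console.error);
```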

This is a bit of a rant, but it blows my mind that an app built around a database can just forget about the task you gave it because someone else’s app also needs CPU time.


I experienced something similar when building the last update for Better Uploader. If I hit the server with 50 upload-content requests at once, it would just give up without returning any errors. In my code I changed the function to an async/await version that waits for each upload to finish before starting the next. In my case this didn’t affect performance, since the files Bubble allows you to upload are so small and it only handles one file at a time anyway.
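In rough TypeScript, the change was basically this (uploadToBubble is a made-up stand-in for the plugin’s real upload call):

```typescript
// Stand-in for the plugin's real upload request; the exact call isn't the point here.
async function uploadToBubble(file: File): Promise<string> {
  // ...actual upload request would go here...
  return "uploaded-file-url";
}

// Before: every request fired at once, which is what made the server give up.
async function uploadAllAtOnce(files: File[]): Promise<string[]> {
  return Promise.all(files.map((file) => uploadToBubble(file)));
}

// After: await each upload before starting the next, so only one request
// is ever in flight at a time.
async function uploadSequentially(files: File[]): Promise<string[]> {
  const urls: string[] = [];
  for (const file of files) {
    urls.push(await uploadToBubble(file));
  }
  return urls;
}
```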

There’s no documentation on this, but here’s a suggestion. Remember, the server only gave up when I sent it 50 requests at once. Give this a try: hit it 49 times, wait, then hit it another 49 times, and so on. I’m not sure how you’ll end up implementing this, but if it’s the same limitation then it should work in theory. Also, keep in mind this was in a plugin environment, so who knows if the limit is the same.
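Something along these lines, with sendRequest standing in for whatever call you’re making, and the chunk size and pause picked arbitrarily:

```typescript
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Stand-in for whatever request you're hammering the server with.
async function sendRequest(item: unknown): Promise<void> {
  // ...actual API call would go here...
}

// Stay under the observed 50-request ceiling: fire 49, pause, fire the next 49.
async function sendInChunks(items: unknown[], chunkSize = 49, pauseMs = 2000): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    await Promise.all(chunk.map(sendRequest)); // at most 49 in flight at once
    if (i + chunkSize < items.length) await sleep(pauseMs);
  }
}
```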

Try it and get back to me because I’m really curious.

All the best


That would be a good feature request, Tyler.

It’s the giving-up part that frustrates me. A platform that claims to be intuitive and robust should not just give up or stop. We are building production systems here; stopping can have severe implications.


Yeah, it’s really confusing to me. The API throws an “App too busy” error, and then, once it kills the process, my app’s usage drops to 0%. Which would have been a perfect time to just continue the task…


I read the recent July 2022 update, and they mentioned that an overhaul of bulk data manipulation in the backend is in the final stages of testing, so I’m crossing my fingers.


Definitely curious about this overhaul. All in all they’re doing good things; the speed at which they release new, very useful features has gone up a lot in the last 6 to 8 months.
