Why don't we stop counting WUs for crawlers and bots?

Hey everyone,
We’d like to suggest that Bubble exclude search engines, crawlers, and bots from workload unit (WU) consumption, if possible.

I initially thought these were already excluded, but according to a recent exchange with Bubble’s support, they are not.

I don’t have exact statistics (feel free to share!) on how much search engine and crawler visits contribute to WU consumption, but since they probably consume page-load WUs, the total could add up quickly.
I’m also not sure how easy this would be to implement, but I assume these bots mostly use specific user agents that could be detected.
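As a rough illustration of the user-agent idea, here is a minimal sketch of how known crawlers could be flagged. The bot list below is illustrative, not exhaustive, and this is not how Bubble does (or would) implement it; real-world detection usually combines user-agent checks with reverse-DNS verification, since user agents can be spoofed.

```python
import re

# Hypothetical, non-exhaustive list of well-known crawler signatures.
KNOWN_BOT_PATTERN = re.compile(
    r"Googlebot|Bingbot|DuckDuckBot|Baiduspider|YandexBot|AhrefsBot|facebookexternalhit",
    re.IGNORECASE,
)

def is_known_bot(user_agent: str) -> bool:
    """Return True if the user-agent string matches a known crawler signature."""
    return bool(KNOWN_BOT_PATTERN.search(user_agent))

# Example:
# is_known_bot("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
# matches, while a regular desktop browser user agent does not.
```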

Here’s the ideaboard post if you want to upvote it: Bubble | Build Powerful Full-Stack Apps Without Code



I think it would be reasonable to exclude crawlers and search bots


In one of our apps, we found that on a given day we had 322K page loads where JS was never loaded and 55K where it was, indicating a large number of bots/crawlers visiting the app. This was right after launch, and we already see this number going down. So the question is: does Bubble spend resources serving pages to those bots? If so, a conversation about page-load cost makes sense.
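Taking the figures above at face value, and making the (optimistic) assumption that every no-JS page load is a bot or crawler, the likely-bot share of traffic works out to roughly 85%:

```python
# Figures from the post: page loads where JS never executed vs where it did.
# Treating all no-JS loads as bots is an upper bound, not a measurement.
no_js_loads = 322_000
js_loads = 55_000

total = no_js_loads + js_loads
likely_bot_share = no_js_loads / total
print(f"{likely_bot_share:.1%} of page loads likely came from bots/crawlers")
# prints "85.4% of page loads likely came from bots/crawlers"
```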

Fortunately, none of my web applications are the companies’ main pages (those are usually built with WordPress); they are independent systems hosted on subdomains that are hard to find in searches (because I don’t even index them).

But this is something that needs to be reviewed. For sure.

Interesting. I’m guessing it comes down to the “type” of bot. I can imagine some bots need JS rendering (to measure LCP, for example) while others don’t (for SEO?).