@petter mentions in his excellent backend database trigger tutorial that one should be careful with condition complexity, because any change to any Thing of that trigger’s Data Type will cause the trigger to check the condition and consume server capacity.
However, I have multiple database triggers on the same data type, where only one of the events will execute based on conditions.
In the server logs, every time one of these database Things changes, I would expect to see logs of both “Event condition passed” and “Event condition failed” – but for database trigger events I only see the “passed” events.
This differs from, for example, other Actions in the logs that show both “passed” and “failed” actions, based on conditions. (I made sure the advanced server log setting “Event condition not passed” is checked, but the failed database trigger events still do not appear in the log.)
Is it best practice to have only one database trigger per data type to reduce server capacity usage? Should I restructure my triggers accordingly, or is my current setup OK?
This leads to my bigger question:
If I don’t see an event in the server logs, does that mean the event did not consume my available capacity?
Or are there server events occurring that are being counted toward my capacity which are not showing up in the logs?
I’ll put in my two cents here. Generally I’d say that anything that has to be checked on a server level will obviously eat up some capacity. Whether it happens across two different triggers, or in one trigger with conditional actions inside it, probably doesn’t make a huge difference. It’s the total amount of calculation needed that matters.
That being said, we don’t know how this all happens on the server level. What we see in the Bubble editor is more or less a dumbed down explanation for what exactly a process is going to do, but it doesn’t tell us how it’s doing it. There could be (and probably is) a lot of optimisation going on under the hood, meaning that Triggers may not run exactly as they look in the editor, in the same way that Bubble will index specific data types for faster searching without telling us.
For your last question, my hunch is no: it still spends some capacity to check the condition, as otherwise the trigger could never run at all. So it’s not really a yes/no question, but a question of just how much capacity is spent getting to that “no” – and that’s a question for Bubble’s engineers. I can’t really provide a best practice here (others may feel differently and are welcome to contribute), other than to say that I’m rarely concerned about the capacity triggers spend, as I’ve never seen it make much of a difference to the total spending on the charts. In my view their usefulness far outweighs their potential capacity cost.
Obviously that comes with the caveat that common sense needs to be used: the total spending of a trigger is basically 1) the complexity of the trigger * 2) the number of times it needs to run. So a complex trigger that is checked millions of times per hour is obviously not going to be very performant.
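To make that cost model concrete, here is a minimal sketch in Python. Bubble does not publish per-check capacity figures, so the complexity values and change rates below are made-up illustrative units, not real measurements; the point is only that total work is (roughly) complexity × run count, however you split it across triggers:

```python
# Illustrative cost model only: Bubble does not expose per-check capacity
# numbers, so these are made-up units to show the multiplication, nothing more.

def trigger_cost(condition_complexity: float, runs: int) -> float:
    """Total capacity spent ~= per-check condition complexity * number of checks."""
    return condition_complexity * runs

# Assume 10,000 changes per hour to Things of this data type.
changes_per_hour = 10_000

# Three separate triggers on the same data type: every change checks all three.
three_simple = sum(trigger_cost(1.0, changes_per_hour) for _ in range(3))

# The same logic folded into one trigger whose condition is roughly as
# complex as the three simple conditions combined.
one_combined = trigger_cost(3.0, changes_per_hour)

print(three_simple, one_combined)  # 30000.0 30000.0 — total work is what matters
```

Under this (admittedly simplified) model, splitting or merging triggers doesn’t change the total; what blows up the cost is a genuinely complex condition checked at a very high change rate.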
I know @NigelG has had a somewhat more critical view of backend triggers and their reliability. I have found them to be pretty reliable myself, but would love to hear your thoughts, Nigel. I’m guessing that in cases where you’ve found them unreliable, it’s been related to capacity somehow – not sure if this is still your view, or if Bubble have addressed the issues you were experiencing.