My next bill is $676 because I have royally messed up one of my backend workflows. I’m currently at 2.2 million workload units, even though my app shouldn’t be using anywhere near that much.
The culprit is a database trigger event: whenever new data is imported through an API call, it checks a data field called OPRID and tests whether it contains the value “Spin #”, where # is a number between 0 and 9.
The Search for has no constraints or sorting, since the find-and-replace does the job most smoothly, and it finds and replaces across thousands of entries. I just didn’t imagine it would use this many workload units. Unfortunately I can’t change the data coming in, so this process is a must. I have also tried different iterations where it finds all the duplicates and deletes all but one (I do need help with this; I’m not sure whether my tests and trials of it really contributed to the workload usage).
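For reference, the condition I’m describing is essentially this pattern check (a minimal Python sketch of the match only; the actual replacement value isn’t shown here):

```python
import re

# OPRID values that contain "Spin " followed by a single digit 0-9
SPIN_PATTERN = re.compile(r"Spin [0-9]")

def needs_scrub(oprid: str) -> bool:
    """True when the OPRID field contains e.g. 'Spin 3'."""
    return bool(SPIN_PATTERN.search(oprid))

print(needs_scrub("Spin 4"))    # True
print(needs_scrub("PRESS-01"))  # False
```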
If anyone can explain what is going on and how to stop it, that would be amazing.
Thanks,
Sarrie
Edit: Here it is, folks! (Overages are disabled; I just re-enabled them to show this.)
Unfortunately I can’t change the data coming in, as it’s hard-wired into the machinery, and it would be too expensive to have someone come in and reprogram it, since that would be a big job across 3 separate machines.
I wonder if there is a more appropriate trigger that could be used without flooding the workload units, though. At the moment it works as a static database unless a new type of product is created, so this wouldn’t happen often, and I would only like to apply it to the new data that has come in rather than to the whole database all over again.
There is little information to work with here, but if it is a lot of data or happens often, why not use something like Supabase? AI can set up a webhook or API in Supabase, which uses Postgres as its database, and you can do whatever you want with your data at a thousandth of the cost you get invoiced with Bubble.
If you need to have some form of the data in Bubble, you can simply call the Supabase tables directly via the API, or you could set up a view.
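For example, reading a Supabase table from a script is a single REST call against its PostgREST endpoint (a rough sketch; the project URL, table name, and key below are placeholders):

```python
import requests

# Placeholders - substitute your own Supabase project ref, table name, and key
SUPABASE_URL = "https://your-project-ref.supabase.co"
SUPABASE_KEY = "YOUR_SUPABASE_API_KEY"

def fetch_routelinks(limit: int = 100) -> list[dict]:
    """Read rows straight from a Supabase (PostgREST) table or view."""
    resp = requests.get(
        f"{SUPABASE_URL}/rest/v1/routelink",
        headers={
            "apikey": SUPABASE_KEY,
            "Authorization": f"Bearer {SUPABASE_KEY}",
        },
        params={"select": "*", "limit": limit},
    )
    resp.raise_for_status()
    return resp.json()

print(fetch_routelinks(5))
```

The same endpoint can also be called from Bubble’s API Connector if you want the results back inside the app.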
in general “data triggers” should be avoided as they do chew up a lot of WUs.
any change to a field on the data triggers the data trigger to re-evaluate… so autobinding can be especially costly.
A few ways to solve:
create the thing, then trigger an API workflow for the thing - this way the logic only runs once
change to a daily data scrub method - search for data that hasn’t been scrubbed yet, then process it in bulk using “make changes to a list of things”. Update the data so it doesn’t match the scrub filter again until needed.
scrub the data before import (see the sketch after this list)
scrub the data only when it is needed by the user (and hasn’t been scrubbed yet) - you may find that you don’t need to scrub all the data, just a smaller percentage
restructure the database so you don’t need to scrub so much data - instead of scrubbing the OPRID on the routelink, you could add a field on the OPRID that is already scrubbed and then just write that instead (not sure about your structure/purpose, but hopefully you get the idea)
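on the “scrub before import” option, a rough Python sketch of the idea is below - the field name and the replacement value are assumptions since I don’t know your exact find-and-replace, but the point is that the cleaning happens before the data ever reaches Bubble, so no trigger has to run:

```python
import re

SPIN_PATTERN = re.compile(r"Spin [0-9]")

def scrub_record(record: dict) -> dict:
    """Return a copy of the record with OPRID cleaned before import.
    The replacement string is a placeholder - use whatever your
    find-and-replace currently writes."""
    cleaned = dict(record)
    oprid = cleaned.get("OPRID", "")
    if SPIN_PATTERN.search(oprid):
        cleaned["OPRID"] = SPIN_PATTERN.sub("Spin", oprid)  # placeholder replacement
    return cleaned

incoming = [
    {"OPRID": "Spin 3", "product": "A"},
    {"OPRID": "PRESS", "product": "B"},
]
scrubbed = [scrub_record(r) for r in incoming]
print(scrubbed)
```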
if you have a data trigger in an app and the data has 20 autobinding fields on a page, then that data trigger is evaluated 20 times if the user changes each field.
Unless I am misunderstanding this Bubble documentation?
Each backend database trigger event evaluated = 0.05
x 20
= 1 wu
you might only need 1 field out of the 20 to trigger something, and you can do that only when that 1 field changes, by triggering an API workflow instead
Adding a new item to the API workflow scheduler = 0.1
x 1
= 0.1 wu
if the field that you are using in the data trigger is a less commonly edited one, then the wasted WU usage can be amplified even higher.
further, if you have backend workflows that change fields on that data, they will also cause the data trigger to evaluate. “make changes to a list of things” etc. can be rather costly with data triggers.
or if you happen to do a bulk update or import on a data type, you can quite easily fire a data trigger thousands of times without realizing it.
update 10,000 things
0.05 x 10,000
= 500 wu
hence my comment “in general data triggers should be avoided as they do chew up a lot of WUs”
I don’t have a direct solution for your WU woes, but as far as what you’re on the hook for money-wise, you can retroactively add a Workload Tier to your plan, which not only includes additional WU, but reduces the cost of overages.
I based these figures on a total of 2,250,000 units. I recommend adding Tier 2, which is $99/mo and includes an additional 750K WU. Not only that, but it reduces the cost of overages from $0.30/1K WU to $0.14/1K WU, so $185.50 for the additional 1,325,000 WU.
These are the totals:
Starter Plan: $32
Workload Tier 2: $99
1.325mil add’l: $185.50
Grand Total: $316.50
(Less than 47% of the original price / almost $360 in savings)
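For anyone checking the math, the breakdown works out like this (the 175K figure is simply the WU implied as included in the Starter plan, i.e. 2,250,000 - 750,000 - 1,325,000):

```python
total_wu     = 2_250_000
starter_wu   = 175_000        # WU implied as included in the Starter plan
tier2_wu     = 750_000        # additional WU included with Workload Tier 2
overage_rate = 0.14 / 1_000   # $ per WU once Tier 2 is added (down from 0.30/1K)

overage_wu   = total_wu - starter_wu - tier2_wu   # 1,325,000
overage_cost = overage_wu * overage_rate          # 185.50
grand_total  = 32 + 99 + overage_cost             # Starter + Tier 2 + overages

print(overage_wu, round(overage_cost, 2), round(grand_total, 2))
# 1325000 185.5 316.5
```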
Again, this can be added retroactively, so if you add it to your plan before your plan’s month ends, it’ll apply and save you some money.
It’s worth contacting support and explaining what you did and the lesson learned. You may find leniency on their part if you’ve clearly made a mistake. They may not, of course, give you any kind of credit, but it has been known to happen for genuine human error.
I did this once. I didn’t take the support credit, because once I purchased the retrospective WU package it reduced my WU overages significantly.
This really isn’t that much. People overestimate the impact of WU costs and underestimate the value in having a well built app that’s modular and maintainable.
I have had a look at Supabase; it looks interesting, but it also looks overwhelming, and time is a factor for me. I would rather try to find another alternative for this one issue in my app. Appreciate your input though!
given the original poster used 2.2 million WU from a data-triggered workflow that cost $676 USD… I’d say the data trigger is definitely something that can be optimized.
data triggers are regularly overlooked - I’ve often done a bulk update on a data type and then realised, after checking the reports, that I “accidentally” triggered a data-triggered workflow to run, which ended up consuming far more WU than my intended bulk update.
data triggers are particularly expensive when used a lot in an app, as they can cause a waterfall effect if one triggers another data type to be updated and that data type has its own data-triggered event… and so on.
I agree that a well built app is modular and maintainable. I just think there are better ways to get there than data triggered events.
For instance I have a “create popup” for most data types, and I just put all the “when data is created” logic on the save button. This gets rid of the “data is created” data trigger that a lot of apps use, and it’s very modular and maintainable since the logic is in one reusable element that can be used whenever I want to create that data type. If I need to run that logic in the popup but also from other places, I’ll just move it to a backend workflow and call it when I need it.
I don’t see the logic in checking whether one field changed whenever anything changes on the data. It’s much more efficient to only run the logic when it is needed.
there are a lot of different ways to achieve the same end result in Bubble, and each has pros and cons. I’m not saying you should never use data triggers, just that in my experience an API workflow is better in most cases (at least in the apps I usually build).
Hey,
Those sound like great price points for the WU you’re getting; however, my app typically doesn’t use that much WU, so I’d like to save on costs unless it’s absolutely necessary. In terms of redundancy, it would’ve been useful to have here, not realising how many WU I was using with this one backend workflow.
Appreciate your input though and will look towards it in the future if I ever need more WU.
Apologies, as I am still a bit new to Bubble.io, but how would I go about approaching this? I don’t fully understand. Currently, the data is being imported by an API call from Postman, the third-party software. I was hoping for it to detect changes between the dataset I currently have (about 8,000 entries) and the new data being imported, so that when I create an action it only applies to the new data coming in. Is it possible to do this with “Only when” conditions on the count before and after, possibly?
What would trigger this though if data triggers take up too much WU?
Unfortunately I can’t do that; the data is part of our offline network with Microsoft Dynamics, and I can’t change the inputs going into it (for now at least).
This is another thing I am looking into. Currently, the user would need it to match a product (one that shares data fields or data types) being tracked against the “routelink” database, and the user would trigger it differently instead of a data trigger doing so. Again, I would need a bit of guidance on this.
The OPRID is a data field in my “routelink” database. It has to be imported exactly as it comes from the database on our offline network with Microsoft Dynamics for it to work with Postman, at least in my experience so far.
Appreciate your input, it has surely given me some more insight on how to approach this.
I have already emailed them and explained the situation. I did say it was an accident, but I don’t expect leniency; their support has already been wonderful with issues I’ve had.
It’s up to them whether they want to be lenient or not, but as I have mentioned here regarding purchasing the additional WU per month:
So far, my app has been fine with WU usage. This is my first time using a data trigger, and it was my own mistake that caused this. Would you have any insight to share on how I can apply my data changes with the data trigger workflow, but only apply them to the new data coming in rather than to the whole DB?
It’s still not very clear to me exactly what you are trying to do, but if you import data and only want to change that data, you can for instance set a field to “to_process” on import, apply your logic only to those records in your conditions, and then set the field to “processed” once the logic has been applied. Or, perhaps better, have a processed_time field that is empty by default: only process records where it is empty, and when done set a date/time on that field. Use UTC, and now whenever you want a report or whatnot you can ask things like “at what time was record x processed?” and you have it.
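In plain terms, that flag-based approach boils down to something like this (a Python sketch with made-up field names, purely to show the idea):

```python
from datetime import datetime, timezone

def process_new_records(records: list[dict]) -> None:
    """Apply the scrub logic only to records that haven't been processed yet,
    then stamp processed_time in UTC so they never match the filter again."""
    for record in records:
        if record.get("processed_time"):
            continue  # already handled - skip it
        apply_scrub_logic(record)  # your existing find-and-replace step
        record["processed_time"] = datetime.now(timezone.utc).isoformat()

def apply_scrub_logic(record: dict) -> None:
    # Placeholder for whatever the trigger currently does to OPRID
    pass
```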
you’re likely best to pre-process the data with something like make.com
make.com is an easy drag-and-drop tool that you can use to connect Microsoft to Bubble.
to pre-process 10,000 things in make.com you’d pay about $9 USD.
your Make scenario would look something like:
1. webhook or Microsoft data call
2. pre-process / do the regex in make.com
3. create the data in Bubble (could use the bulk create also)
Make would replace Postman in your tech stack and be fine for most use cases. if you’re doing a lot of API work you’d be better off with other solutions, but Make is likely all you need, particularly if you are just starting out on Bubble.
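step 3 can be one request rather than one row at a time - Bubble’s Data API has a bulk-create endpoint that takes newline-delimited JSON. a rough sketch of that call (the app domain, type name, and key are placeholders, and it assumes the Data API is enabled for that type - check the Data API docs for your app):

```python
import json
import requests

APP_URL = "https://yourapp.bubbleapps.io"   # placeholder app domain
API_KEY = "YOUR_BUBBLE_API_KEY"             # placeholder Data API key

def bulk_create_routelinks(records: list[dict]) -> str:
    """Create many 'routelink' things in one call via the Data API bulk endpoint.
    The body is one JSON object per line, sent as text/plain."""
    body = "\n".join(json.dumps(r) for r in records)
    resp = requests.post(
        f"{APP_URL}/api/1.1/obj/routelink/bulk",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "text/plain",
        },
        data=body,
    )
    resp.raise_for_status()
    return resp.text  # one status line per created thing
```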