
Possibility of atomicity, exclusivity, or locking on data operations?


I have a database trigger with a couple of steps in it. It gets triggered when a wallet top-up entry is created. But if two entries are created very close to each other, the two triggers also fire at nearly the same time.

Now, since the trigger has a couple of steps, it can happen that Step 1 of each trigger executes first and then Step 2 of both execute. This causes data inconsistencies, because in Step 1 I update one piece of data, and in Step 2 I update another piece of data using the value updated in Step 1.

Here’s a pictorial description of this:
[Screenshot: Screen Shot 2021-11-25 at 11.14.48 PM]

So, say the balance before the top-ups was x, and the two top-ups that are supposed to happen are a and b. The total balance at the end should be x+a+b, but it ends up being either x+a or x+b, because both operations take x as the balance. The database trigger has a few steps of calculation, and in the first step it notes down the balance. If both triggers run in parallel, both take x as the initial balance and update it.
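The lost update described above can be sketched in plain Python (not Bubble; all names here are illustrative) by explicitly interleaving the two triggers' steps:

```python
# Both "triggers" read the balance before either writes, so one top-up is lost.
balance = {"value": 100}  # x = 100
a, b = 10, 20

# Interleaving: Step 1 of both triggers runs before Step 2 of either.
seen_by_a = balance["value"]      # Trigger A, step 1: reads x
seen_by_b = balance["value"]      # Trigger B, step 1: also reads x
balance["value"] = seen_by_a + a  # Trigger A, step 2: writes x + a
balance["value"] = seen_by_b + b  # Trigger B, step 2: overwrites with x + b

print(balance["value"])  # 120 (x + b) instead of the expected 130 (x + a + b)
```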

I have worked around it for now by making the second step a separate API call and adding a delay before invoking it (since I can’t add a delay inside backend workflows). But is there a better solution that assures atomicity of operations, or a way to lock things so that only one trigger can run at a time?


Hi Mukesh,

Two possibilities, both of which are mentioned in the documentation about workflow execution rules:

  • Make sure the conditional update references the Result of [a prior step] (which would perform the non-conditional update).

  • Put each update operation into its own Custom Event (basically, a Bubble “function”) and invoke each from the same workflow. (Custom events execute sequentially.)


Thanks @sudsy for the response. I think you have misunderstood me. I am aware of this documentation.

I am not saying that Step 2 in one trigger is executing before Step 1. I am saying that since the same trigger is running in parallel, data inconsistency is being caused.

So one of the two scenarios below is happening:

Scenario 1:

Step 1 of Trigger A (Update user balance to x+a)
Step 1 of Trigger B (Update User balance to x+a+b)
Step 2 of Trigger B (Access user balance and update it in TxnB)
Step 2 of Trigger A (Access user balance and update it in TxnA)

Now TxnA ends up recording the wrong balance (x+a+b) when it should have recorded x+a.

Scenario 2:

Step 1 of Trigger A (Update user balance to x+a)
Step 1 of Trigger B (Update User balance to x+b)
Step 2 of Trigger B (Access user balance and update it in TxnB)
Step 2 of Trigger A (Access user balance and update it in TxnA)

Now x+b becomes the final balance everywhere instead of x+a+b.
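Scenario 1 is subtler than Scenario 2: the balance itself ends up correct, but the transaction record captures the wrong value. A minimal Python sketch of that interleaving (again, illustrative names, not Bubble):

```python
balance = {"value": 100}   # x = 100
txn_a, txn_b = {}, {}
a, b = 10, 20

balance["value"] += a                # Step 1 of Trigger A: balance is now x + a
balance["value"] += b                # Step 1 of Trigger B: balance is now x + a + b
txn_b["balance"] = balance["value"]  # Step 2 of Trigger B: records x + a + b (correct)
txn_a["balance"] = balance["value"]  # Step 2 of Trigger A: also records x + a + b,
                                     # but TxnA should have recorded x + a = 110

print(txn_a["balance"])  # 130 — the wrong balance stored in TxnA
```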

I don’t understand why that’s necessary. Is there no way to refactor the logic such that 2 instances don’t need to be run in parallel? Can’t the logic in the 2 blue symbols be combined?

If you have screenshots of the actual workflows, the additional context might be helpful.

Like I mentioned in my first post, I have worked around it by making the second workflow an API call and adding an artificial delay so that they don’t run in parallel. The question I am asking is about building the right solution. In this case I noticed the issue by chance, but in some other case I might not.

The logic of the two blue boxes is actually in one workflow only. First one blue box is executed and then the second. But all of that happens so quickly that it could still cause these kinds of mix-ups.

However, two triggers running in parallel could still be a reality in general, right? E.g. what if a user makes payments from two places at once, or the payment gateway somehow calls our webhook for two payments at the same time, or in any other case where a database trigger fires because of two independent actions? Say two users sign up at the same time and we assign them serial-number IDs.

I understand, but unfortunately, I still don’t fully understand your current implementation.

What exactly are the “blue boxes”? Are they custom events? Actions? Bubble terminology would be helpful.

If they’re just actions that create a thing, then I don’t understand why the “Add complementary” step can’t be placed after the “Add wallet top-up” step with the former referencing the Result of the latter.

Sure. Here’s the webhook workflow.

Step 2 and Step 4 in this are the two blue boxes.

And here is the Step 4 API workflow.

These are not single-action workflows because some calculations, checks, etc. have to be done.

Regarding the first screenshot…

Have you tried placing a condition on step 4 that says “Only when result of step 2 is not empty”?

That should force step 4 to wait until step 2 is complete before scheduling the workflow.

It has that condition. But the point is that both of those actions (steps 2 and 4) create a new object of a data type called “wallet transaction”. Creating a wallet transaction fires the DB trigger. So even if step 4 happens after step 2, the two DB trigger runs may still happen at nearly the same time.

Like I said earlier, it is not that Step 4 is happening before or at the same time as Step 2. It is that Step 2 and Step 4 each induce a database trigger event, and those two events end up happening at the same time.

@mghatiya I assume the wallet transaction data type has a wallet field or at least a user field, right? If so, why not create a field on the user or wallet data type to lock the record while the DB trigger is performing the update?

You can create two custom workflows:

When control record is unlocked:
Lock control record, update wallet, update wallet transaction data type to mark it as processed, then unlock control record

When control record is locked:
wait 5 seconds, trigger workflow above, wait 5 seconds, reschedule this same workflow if relevant wallet transaction’s status is not processed

Now just put these two custom workflows in your DB trigger with conditionals based on the control record’s lock status. Hope that makes sense and I understood your question properly!
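The lock-and-retry pattern suggested above can be sketched with plain Python threads (Bubble workflows aren’t threads, and `control_lock`, `process_top_up`, etc. are illustrative names, but the serialization idea is the same):

```python
import threading
import time

control_lock = threading.Lock()  # stands in for the lock field on the control record
balance = {"value": 100}

def process_top_up(amount):
    # "When control record is locked": wait a bit and retry
    # (analogous to the 5-second reschedule in the suggested workflow).
    while not control_lock.acquire(blocking=False):
        time.sleep(0.01)
    try:
        current = balance["value"]           # step 1: read balance
        time.sleep(0.005)                    # simulate work between steps
        balance["value"] = current + amount  # step 2: write updated balance
    finally:
        control_lock.release()               # "unlock control record"

threads = [threading.Thread(target=process_top_up, args=(amt,)) for amt in (10, 20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance["value"])  # 130: the lock serializes the two triggers
```

Without the lock, the deliberate pause between read and write would make the triggers overwrite each other, as in the scenarios above; with it, the second trigger waits until the first finishes.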

Thanks Alex. Yes, what you have suggested would work fine, I think. I would have resorted to that if a delay weren’t an option in my case. Since I could simply add a delay here, I was fine.

However, I am curious to know whether Bubble provides a built-in way of implementing this lock or atomicity (or of assuring serial execution of triggers instead of allowing them to run in parallel). Having to create my own lock, with the retry after 5 seconds, etc., is overhead I would ideally want to avoid. A bug in my implementation might make it run in an infinite loop. As it is, my application has become quite complex, and it is hard for me to visualise and debug.

This is like how we create recursive workflows to run something on a list. We have to build that special logic, APIs, etc., and those incrementally make things more and more complex.

Can the logic be refactored to not rely on database triggers at all? That way, you’d have explicit control over when the balance gets updated.

The logic that currently comprises the database trigger could become a custom event that’s explicitly invoked at the appropriate times instead of basically happening as a “side effect”.

The thing is, wallet transactions are created throughout the application in many contexts: when people top up, when credit is added manually, when various incentives are given, etc. So I wanted to take advantage of triggers to avoid duplicating code. That way I don’t have to keep writing this logic everywhere, and I know for sure that whenever a wallet transaction is created, the follow-up happens.

By using triggers, I can assure that if I have to make any change tomorrow I can make that change at just one place.

And like I said earlier, my intention here is not to find a solution or workaround for this case. That I have already found. But I am looking for a more robust and cleaner approach, if Bubble supports one. There can be many use cases where triggers run in parallel, or where different users do something that changes and accesses common data points.

We need a way to tell, before the beginning of a workflow/trigger/API call, that certain data has to be locked while it runs, so that other operations wait for it to finish before executing.

I completely understand and agree with that philosophy. It’s one that I share, and I try hard to simplify and eliminate redundant code; but a custom event would still keep the shared logic “centralized”. Sure, the event itself would have to be explicitly invoked where needed, but it’s only the custom event that would need to be changed if the “balance update” logic needed to be tweaked.

I appreciate you bringing this issue to light, though, as it will certainly prompt me to more carefully and critically consider how and when I employ DB triggers in the future. And to be honest, based on what I now know, while I can see DB triggers being very useful for triggering external events, I don’t think I’d use them in a situation like yours.

If you’d like to see changes/improvements to the way DB triggers work (such as a locking mechanism), then submitting a feature request or bug report is probably the best option.

BTW, as I understand, a DB trigger can’t be triggered from within itself, so if you can figure out how to refactor the logic such that all the critical calculations happen within a single DB trigger workflow, then that might be an option.

Anyway, good luck, and thanks for raising this issue!


A custom event also won’t be the solution, because my “create wallet transaction” action sometimes happens in front-end workflows and sometimes in backend workflows. Wherever it appears in a front-end workflow, I would have to put that custom event on those pages too, or create a reusable element holding the custom event, or create an API. All of these take more effort and add complexity to the code, so I chose the trigger for what I could de-duplicate.

One more thing I would like to highlight: while I have described this in the context of triggers, the issue can occur in any other workflow/API, etc., as long as they access a common data point.

Thanks Steve for taking time to understand my issue and sharing your thoughts.

So it sounds like there should be a “Create wallet transaction” back-end workflow (which of course can be invoked from either the front- or back-end). It would handle creating the object as well as updating the balance.

I guess so, as long as the events involve “write” operations happening close enough together.

Out of curiosity, how long is the delay in your work-around?

Yes, you are right. That could and should have been the case. But it required creating an API where I pass different objects and parameters, etc., so I just avoided it; I generally create APIs where I can just pass things. But even doing what you suggested would result in two API workflows getting invoked in my case, and the collision issue I am talking about could still occur.

I have added a delay of 30 seconds in my case, hoping it will be enough. It seems fine at the small scale I am at. Not sure what will happen when we scale up.