Pretty stumped, hoping someone might be able to crack this…
Trying to work out if there are any known issues related to a bug I’m experiencing. Basically, inside an API Workflow I am creating a thing (“Service Line Item”) and then adding it to a list [of “Service Line Items”] on another data type. My issue is that although my “Service Line Items” are always created, only a few end up being added to the List.
The API Workflow takes in a ‘Service Instance’ and a list of ‘Task Types’. Here’s an example of where I call it:
It’s probably a race condition issue - where the service_instance item is being read/updated multiple times (at more or less the same time), leading to data inconsistencies.
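To make that failure mode concrete, here’s a minimal sketch of the read-modify-write race in Python. All the names are made up for illustration (this is a toy model of the behaviour, not Bubble’s internals):

```python
import threading

# The "Service Instance" is a dict standing in for a database record;
# each thread is one overlapping workflow run. Illustrative only.
db = {"service_instance": {"line_items": []}}
barrier = threading.Barrier(10)

def add_line_item(item):
    # Read-modify-write without locking: read the current list, append
    # to a copy, then write the whole list back.
    current = list(db["service_instance"]["line_items"])  # read
    barrier.wait()  # force all 10 runs to overlap after reading
    current.append(item)                                  # modify
    db["service_instance"]["line_items"] = current        # write back

threads = [threading.Thread(target=add_line_item, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every run read the list *before* any run wrote it back, so each write
# clobbers the others: only 1 of the 10 items survives.
print(len(db["service_instance"]["line_items"]))
```

This is exactly the “things are created but the list ends up short” symptom: the create step always succeeds, but the write-back of the list field overwrites other runs’ additions.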
Better to use a recursive WF for this.
Alternatively, don’t add the line items to the service_instance at all (you’re already linking them the other way, so there’s really no need to add them to a list field as well).
That is strange @thethinklab.au … I’ve usually only seen race conditions like this on much larger data sets, and usually before the improvements Bubble made to “on a list” actions last year. Something strange is going on. You could also check the Logs and see if anything odd stands out.
Agree with Adam that a recursive API workflow can give a definitive solution to this issue, as it forces the workflow to run sequentially (one at a time), eliminating any issue caused by concurrent execution. Also, it’s not necessary to have both data types referencing each other.
I’m not sure there’s anything ‘strange’ going on here—this behaviour is actually to be expected.
In fact, if anything, the recent improvements in speed when running workflows on a list increase the likelihood of race condition issues like this - as was discussed elsewhere on the forum when those speed enhancements were first introduced (it can happen even when the WF on a list is running on only 2 items).
As @Baloshi69 suggested, adding a longer delay between runs can help mitigate the issue. However, as with any arbitrary delay, it’s a bit of a balancing act:
If the delay is too short, race conditions may still occur.
If it’s too long, it can slow things down unnecessarily—whether that’s a problem depends on your specific use case.
So a recursive workflow is the best way to avoid this, if it’s 100% necessary to add the items to a list field.
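The recursive-workflow pattern can be sketched like this: each run processes one item, then schedules the next run on the remainder of the list, so no two runs ever touch the record at the same time. Again, the names are illustrative stand-ins for Bubble’s schedule-API-workflow mechanics:

```python
# A minimal sketch of a recursive backend workflow: one Task Type per
# "run", then the next run is scheduled on the remaining list.
def run_workflow(service_instance, task_types):
    if not task_types:
        return  # list is empty: stop recursing
    head, rest = task_types[0], task_types[1:]
    line_item = {"task_type": head}                    # Step 1: create the thing
    service_instance["line_items"].append(line_item)   # Step 2: add it to the list
    run_workflow(service_instance, rest)               # Step 3: schedule next run

service_instance = {"line_items": []}
run_workflow(service_instance, ["Install", "Inspect", "Clean"])
print(len(service_instance["line_items"]))  # all 3 items survive
```

Because each “run” finishes its write before the next one starts, there is no overlapping read-modify-write and no lost updates.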
I ended up just adding a backend workflow that adds the Service Line Item to the Service Instance each time it detects one is created. @adamhholmes I recognise the redundancy; however, it made a downstream workflow slightly better by having the list directly on Service Instance. Service Line Items will never change the Service Instance they are related to, so there’s minimal upkeep/maintenance.
Thanks for that @adamhholmes. As I think about it more, I guess that if you have multiple workflows trying to update the same field of a record at the same time, that’s what causes the race condition - not so much the volume of data.
If you are experiencing race conditions when creating things that later need to be added to a list field on another data type, my preferred way to avoid them is to use the Bubble bulk create API to create the whole list of things at once. It returns the list of IDs of the newly created things, which can be used in subsequent workflow actions as the result of that step. You can then run a single action to update the main thing, set to ‘add list’, where the value is a ‘do a search for’ constrained to unique IDs in the result of the previous step (i.e. the bulk create API call).
In my experience this is the most effective way to mitigate race conditions entirely: there’s no unknown delay duration to balance, it reduces the cost of updating the main thing on every recursive backend workflow run, and it avoids creating the things individually and passing parameters through each recursive run.
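For reference, the bulk create endpoint takes a plain-text body with one JSON object per line (one per thing to create). A quick sketch of building that payload - the field name `task_type` is just an example from this thread, and you should check the endpoint path for your own app:

```python
import json

# Build the newline-delimited JSON body for Bubble's bulk-create
# endpoint (shape assumed: POST /api/1.1/obj/<typename>/bulk).
task_types = ["Install", "Inspect", "Clean"]
payload = "\n".join(json.dumps({"task_type": t}) for t in task_types)
print(payload)
```

Each line becomes one created thing, and the response comes back line-by-line in the same order.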
In order to extract the list of IDs from the bulk API call response, you’ll need this regex pattern
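As a hedged sketch of what that extraction can look like: the bulk response is one JSON object per line, each with an `id` field, so one possible regex captures the value of every `id`. The response text below is invented for illustration - check your own logs for the exact shape your app returns:

```python
import re

# Hypothetical example of a bulk-create response: one JSON object per
# line, each containing the new thing's unique id. (Made-up ids.)
response_text = (
    '{"status":"success","id":"1700000000001x100000000000000001"}\n'
    '{"status":"success","id":"1700000000002x100000000000000002"}\n'
    '{"status":"success","id":"1700000000003x100000000000000003"}'
)

# One possible pattern: capture the value of each "id" field.
ids = re.findall(r'"id":\s*"([^"]+)"', response_text)
print(ids)
```

The resulting list of unique IDs is what you feed into the ‘do a search for’ constraint in the follow-up ‘add list’ action.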