As others have noted, API workflow actions frequently fail when the same record is being updated by multiple workflows/actions at once.
The ability for a workflow to reschedule itself seems to have helped with this, but the workflow can still be triggered by a user multiple times in quick succession, which causes problems.
For example, when a volunteer is approved for a volunteering opportunity, I have an API workflow that creates their schedule one date at a time. It works great, but if that volunteer is approved for 3 different opportunities in a row, that workflow will be triggered 3 times in a row, which is going to cause many actions to fail when they bump into each other.
So my question is: how do you deal with this?
My first idea is, whenever that API workflow is triggered, to add the workflow ID to the user. Then if the workflow is triggered again while the user contains a workflow ID, I could schedule the new workflow for 5 minutes in the future. If the user contains 2 workflow IDs, I could delay it 10 minutes. Not sure if this is possible, but it's my first idea. Do you have a better one?
Thanks, but no, because I need to allow the person who's approving these volunteers to do it all one after the other rather than saying "okay, you've approved 1 volunteer, but you have to wait to approve the next one".
My solution, which I think still leaves a little room for error (though I haven't been able to break it yet), is similar to what I described above but even a bit simpler: use a go-between API workflow as follows:
User clicks approve button.
The volunteer offer is tagged as "Updating" and then the go-between API workflow is triggered.
The go-between API workflow has 3 actions. There's an action that triggers the actual workflow right away, but only if the user who made the volunteer offer has only 1 volunteer offer tagged as "Updating". If the user has 2 volunteer offers tagged as "Updating", the actual workflow is triggered in 5 minutes; if 3, then in 10 minutes. (I actually did this in reverse order, so first it checks for 3, then 2, then 1.) Of course, I tag each volunteer offer as "Done Updating" after the actual workflow has done its business.
So what happens is the first time the button is clicked, the actual workflow gets triggered right away. The 2nd time the button is clicked, if the 1st volunteer offer is still tagged "Updating", the actual workflow gets triggered in 5 minutes. And so on.
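In conventional-code terms, the go-between's delay logic is just a function from "how many of this user's offers are still in flight" to "how far out to schedule the real workflow". A minimal Python sketch of that staggering rule (the function name and the minute values mirror the description above; nothing here is Bubble's actual API):

```python
def schedule_delay_minutes(updating_count: int) -> int:
    """Return how many minutes in the future to schedule the actual
    workflow, given how many of the user's volunteer offers are still
    tagged "Updating". Checks are in reverse order (3, then 2, then 1),
    matching the order of the go-between workflow's conditional actions."""
    if updating_count >= 3:
        return 10  # third concurrent approval: wait 10 minutes
    if updating_count == 2:
        return 5   # second concurrent approval: wait 5 minutes
    return 0       # first approval: run right away
```

The reverse-order checks matter in Bubble because each "Only when" condition is evaluated independently; in plain code the `if`/`elif` chain gives the same effect.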
Okay, so my solution up there wasn't great in the end, but I've come up with a better one.
I've made a video in case it will be helpful to someone. It's a rather confusing video because the solution is just so complicated and I didn't want to bore you with a 20-minute video explaining every detail, but hopefully it gives you a general idea of the process:
For the _times2, _times3, _times4 endpoints, it looks like you are using them to manage groups of steps, which run conditionally. Are you intending them to run in sequence? This is what I'd use Custom Events for, as Schedule runs asynchronously, i.e. it runs them at the same time as carrying on with the current workflow.
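The distinction being drawn here, in conventional-code terms: scheduling a workflow is fire-and-forget, while triggering a custom event is a synchronous call that finishes before the current workflow continues. A rough Python analogy, using a thread to stand in for Bubble's scheduler (all names here are illustrative, not Bubble's API):

```python
import threading

log = []

def step(name: str) -> None:
    log.append(name)

def scheduled_style() -> None:
    # Analogy for "Schedule API Workflow": the step is fired off
    # asynchronously, and the current workflow carries on immediately,
    # so the two can interleave in either order.
    t = threading.Thread(target=step, args=("scheduled step",))
    t.start()
    log.append("rest of current workflow")
    t.join()  # only so the demo finishes; Bubble gives no such guarantee

def custom_event_style() -> None:
    # Analogy for "Trigger a custom event": the step runs to
    # completion first, then the current workflow continues.
    step("custom event step")
    log.append("rest of current workflow")
```

With `custom_event_style` the order is deterministic; with `scheduled_style` it isn't, which is exactly why sequencing via Schedule needs extra machinery like the "Only when" conditions discussed below.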
I was thinking about your comment on potentially having to wait an hour for 20 sets of processes.
How about having the _times0 "manager workflow" only invoke _times1 if there are more entries in the queue, AND the last item was updated more than 60 seconds ago.
Then the _times1 "processing workflow" can do its updates, recursively reschedule itself for any remaining queue entries, and not get re-invoked while active.
Thanks @mishav, yes, I'm intending to run them in sequence, which they do because of how I've set up the "Only when" statements in each, i.e. "when there are more volunteer offers to update, schedule the same workflow again" and then "when there are NO more volunteer offers to update, schedule the next workflow." I've never used custom events, so I'm not sure if they would be better?