Hi there, just reading through this. I tried this approach a while ago and found that the APIs don't necessarily fire in order, so depending on timing the 'remaining number' was effectively random. I think that's because the decrement of the count by 1 was relative to each individual API call's timing.
My workaround was a recurring workflow every X seconds that searches on a Current User time field, counts the entries stamped after a process-initiated time, and once the count reaches the expected number removes itself from the recurring schedule.
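That poll-until-complete pattern can be sketched outside the no-code tooling like this. A minimal Python sketch, assuming a callable `count_completed` that stands in for the "search for items with a Current User time field after the process-initiated time" step (the function name, interval, and max-checks cap are all my own illustrative choices, not anything from the platform):

```python
import time

def wait_for_completion(count_completed, expected, interval=5, max_checks=120):
    """Poll until the number of completed items reaches `expected`.

    count_completed: callable returning how many items carry a
    completion timestamp later than the process start time.
    interval: seconds between checks (the "every X seconds").
    max_checks: safety cap so a stalled run can't poll forever.
    """
    done = 0
    for _ in range(max_checks):
        done = count_completed()
        if done >= expected:
            # All items processed: this is the point where the
            # recurring workflow would remove itself.
            return done
        time.sleep(interval)
    raise TimeoutError(f"only {done}/{expected} items completed")
```

The safety cap is worth keeping in the real workflow too: without an exit condition besides "count reached", a run that silently drops one item would leave the recurring check firing indefinitely.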
My main issue at the moment is an API that runs on a list of 60 items (each individual API call takes 70 seconds according to the logs). The whole 60 takes 15 to 20 minutes to complete at 75% of capacity on the Personal Plan, which I believe has 1 unit of capacity.
I think the reason it takes so long is that it's 60 individual API calls, each with its own round-trip lag. I've tried batching them 15 items at a time in a similar API, where each action in the workflow runs on 15 items instead of just one, but it tends to time out.
I wonder whether running 4 at a time in this way would keep me below the 5-minute timeout.
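The arithmetic behind that guess can be checked directly. Assuming the ~70 seconds per item from the logs and a 5-minute (300 s) workflow timeout, 4 × 70 = 280 s does fit, but with only 20 s of headroom; a small sketch with a safety margin (the margin value is my own assumption, not a platform figure):

```python
PER_ITEM_SECONDS = 70      # from the logs: each API call takes ~70 s
TIMEOUT_SECONDS = 5 * 60   # assumed 5-minute workflow timeout

def max_batch_size(per_item=PER_ITEM_SECONDS, timeout=TIMEOUT_SECONDS,
                   margin=0.9):
    # Largest batch that fits inside the timeout, keeping some headroom
    # (margin=0.9 reserves 10% of the window for lag and slow items).
    return int((timeout * margin) // per_item)

# With the raw 300 s limit, 4 items (280 s) just fit; with a 10%
# safety margin only 3 items do, so 4 at a time is cutting it fine.
```

So batches of 4 should stay under the timeout on paper, but any item running slower than 70 s would blow the budget; batches of 3 would be the safer choice if the timeouts persist.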
To get the original 60 items I'm creating a stack in the background, so the time to create them isn't the issue; it's the time to update them.