I am not sure if this is really a bug or not. I am processing a change on a large list (5K+ records) with a condition. After the workflow starts, it stops after completing some but not all of the changes.
This is because the action takes too long for the workflow to complete (probably because the list is so long). You can try adding some capacity and see if that speeds up the process (though at some point, if the list is huge, it just takes a long time…)
I added a temporary speed boost and used all the allowed units. It still did not work.
Do you think this is something we (as users) could have control over? After all, it's our capacity that we're utilizing.
In the meantime, I have scheduled a workflow on a list and increased the interval time to 5 seconds. It's getting the job done, but it's taking forever.
We need operations to time out after a few minutes for the health of the system, but since you can control the length of the list, you can process it in smaller chunks, etc.
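(For anyone reading along: the platform here is no-code, so there's no real code to post, but the chunking idea suggested above can be sketched in plain Python. The `process_in_chunks` function and the `"processed"` field are hypothetical placeholders, not anything from the platform's API.)

```python
# Hypothetical sketch of the chunking approach: instead of changing
# all 5,000+ records in one long-running operation (which hits the
# timeout), process the list in fixed-size batches.
def process_in_chunks(records, chunk_size=100):
    """Apply a change to records one chunk at a time; returns count updated."""
    updated = 0
    for start in range(0, len(records), chunk_size):
        chunk = records[start:start + chunk_size]
        for record in chunk:
            record["processed"] = True  # placeholder for the real change
        updated += len(chunk)
        # In a real workflow, each chunk would be a separate short
        # operation (or a scheduled run), so no single call exceeds
        # the platform's timeout.
    return updated

records = [{"id": i} for i in range(5000)]
print(process_in_chunks(records))  # → 5000
```

The same idea underlies the scheduled-workflow-with-interval workaround mentioned earlier: each run handles a small slice, trading total wall-clock time for staying under the per-operation timeout.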
I’m seeing a similar issue, @emmanuel… a list of 3000 results is returned by an API call in a few seconds, but even trying to do “:items until #10:count” causes the operation to time out… am I unable to break up the list into smaller chunks after the results are returned, or is this unique to my case? Thanks
EDIT: I found an API call that returns the same data in 30 results instead of 3000, and it works much better… good to know for the future.