Backend Workflow Optimisation

Hi everyone,

I have a back-end workflow that I run on an ad-hoc basis to update a list of records I have stored in the database. It’s a relatively modest list (around 160 records) that will hopefully grow over the next few months into the thousands. For each record, the workflow updates around 60 key fields, some of which have conditions.

There are times when the workflow randomly stops. I've checked both the server and capacity logs and see no real issue: average CPU usage is low, and maximum capacity is never hit.

One thing I currently do when the workflow randomly stops is add a ‘modified date’ condition so that it only updates records whose modified date is earlier than the start of the most recent workflow attempt. That way the workflow resumes from the records that still need updating and skips the ones it had already updated before the mysterious bug kicked in. Nonetheless, I wish there were a speedy, trustworthy, and more efficient way to run the workflow that avoids this.
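Since Bubble workflows aren't expressed in code, here is a minimal conceptual sketch of that resume-by-modified-date pattern in Python. The record list, field names, and `last_run_started` timestamp are all hypothetical stand-ins for the database values the workflow would actually use:

```python
from datetime import datetime, timezone

# Hypothetical records; in Bubble these would live in the database.
records = [
    {"id": 1, "modified_date": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified_date": datetime(2023, 3, 1, tzinfo=timezone.utc)},
]

def records_to_update(records, last_run_started):
    """Select only the records not yet touched by the interrupted run."""
    return [r for r in records if r["modified_date"] < last_run_started]

# Timestamp of when the most recent workflow attempt began (assumed).
last_run_started = datetime(2023, 2, 1, tzinfo=timezone.utc)

todo = records_to_update(records, last_run_started)
# Only record 1 remains: record 2 was already modified after the run began.
```

The key design point is comparing against the *start* time of the last run, so a record updated mid-run is never processed twice.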

I worry that as the list of records grows, running the workflow will become an issue and identifying a bug will become problematic. I was therefore wondering if anyone has tips/ideas on efficient workflow structures for workflows of this type/magnitude?



If your workflow got ‘kicked’, there will be an ERROR somewhere in the server logs.
I've also seen this happen when an API call gets stuck, and there's the timeout issue if a delay runs too long. It is better to ‘loop’ with an index counter than to run on a list of things.


To clarify @JohnMark’s answer above (maybe you are already doing it):
use a recursive workflow instead of “Schedule API Workflow on a list of things”.
More info in this video:

I've done these with 10,000s of records without an issue.
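For anyone new to the pattern, here is a minimal Python sketch of what a recursive workflow does conceptually: each invocation processes exactly one record, then schedules itself for the next one. The `update_record` function is a hypothetical stand-in for the actual field updates:

```python
processed = []

def update_record(record):
    # Stand-in for the ~60 field updates done in the real workflow.
    processed.append(record["id"])

def run_workflow_step(records, index=0):
    # Stopping condition: halt once every record has been handled.
    if index >= len(records):
        return
    update_record(records[index])
    # In Bubble this would be "Schedule an API Workflow" pointing at
    # itself with index + 1; here it's a plain recursive call.
    run_workflow_step(records, index + 1)

run_workflow_step([{"id": i} for i in range(5)])
```

Because each step handles a single record, a failure affects only that one invocation, and the remaining records can be picked up where the chain stopped.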


Confirming what @TipLister said above - you’ll want to use “Schedule an API Workflow” (recursively) instead of “Schedule an API Workflow on a list” for the most scalable approach.

Make sure your condition on the recursive action is sound, and actually stops the workflow at a certain point.

If the above two are true, your workflow should never stop running randomly. If it does, I’d suggest looking at what @JohnMark mentioned: a timed-out API call somewhere in the workflow. For example, you mentioned you are updating 60 fields; if one of those dynamic expressions calls an API to get data, and that call doesn’t succeed on a single item, it has the potential to kill the workflow.
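One way to reason about that failure mode: guard the external call so a single bad item is skipped rather than killing the whole run. This is a conceptual Python sketch; `fetch_external_value` is a hypothetical stand-in for whatever API the field update calls:

```python
def fetch_external_value(record_id):
    # Hypothetical external API call used by one of the field updates.
    # Record 2 simulates the one flaky item that times out.
    if record_id == 2:
        raise TimeoutError("simulated timed-out API call")
    return f"value-{record_id}"

def safe_fetch(record_id, default=None):
    # Catch the failure so a single bad item can't stop the whole run.
    try:
        return fetch_external_value(record_id)
    except TimeoutError:
        return default

results = [safe_fetch(i, default="skipped") for i in range(4)]
# Record 2 is marked "skipped" and the remaining records still process.
```

In Bubble terms, the equivalent is putting a condition on the step that uses the API result (or a default value) so the recursive chain continues past a failed lookup.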

I hope this helps!


Thank you all especially @TipLister - the video was incredibly helpful, immensely appreciate it. Certainly less nerve-racking knowing there’s a more reliable and scalable approach. Thank you again!


Glad it helped, and thank you for the kind words!

This topic was automatically closed after 70 days. New replies are no longer allowed.