Schedule Backend Workflow on List - Delete List of Things

I’m having a very hard time with the logs: I’m trying to find the last item that ran successfully, and with it a log entry indicating why the Schedule Backend Workflow on a List is failing.


I’m providing a list of unique call SIDs (around 780 items) to Schedule Backend Workflow on a List. The backend workflow itself has just a single action: delete the list of items in the DB whose unique call SID matches the parameter, so as to delete all duplicates except one.

This is failing, and I cannot find anything in the logs indicating why, even using the advanced filters. The list is large and the logs are so terrible that I can’t scroll to the bottom: after scrolling through hundreds of entries, the time filter malfunctions and the logs go blank once scrolling passes beyond the filter’s time frame, so I never reach the last item.

How can I search the logs to find which item was the last to run successfully, and also the error reporting why the backend workflow on a list failed to run on all items in the list? I understand there may be a timeout issue, since a backend workflow can only run for 5 minutes in total, but this backend workflow on a list fails before 5 minutes; it seems to fail within just 2 minutes.

the best way to use the logs is by timestamp… but you also have to wait up to 5 minutes after something happens for it to show in the logs lol

if you get the timestamp (log current date or use last modified)

just set it as the start of the log filter and then set the end to +1 minute
pretty much all operations will finish in under a minute unless you are doing some heavy recursion.
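The idea above can be sketched outside Bubble; this is just an illustration (the timestamp is made up), showing how a logged timestamp pins down a fixed one-minute search window instead of a moving “5 minutes ago” filter:

```python
from datetime import datetime, timedelta

# Hypothetical logged timestamp (e.g. from a "log current date/time" step
# or the record's last-modified date).
logged_at = datetime.fromisoformat("2024-05-01T14:32:07")

# Fixed window: start at the logged time, end one minute later,
# since most operations finish in well under a minute.
window_start = logged_at
window_end = logged_at + timedelta(minutes=1)

print(window_start.isoformat(), "->", window_end.isoformat())
```

Because both ends of the window are derived from the same stored timestamp, scrolling or waiting never shifts the filter out from under you.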

The logs are still laughably bad though… only giving a tiny fraction of the data you actually need to debug something.

I usually just log things by creating data entries for the workflow I’m debugging (so I can easily expose formula evaluations etc. and narrow down the fault). Debugging anything in the backend can be a black box and a huge time suck.

In your example I’d start with a custom log of the list of thing IDs, plus a count, to ensure I’m passing the expected list into the delete. Then I’d log how many get deleted - maybe there’s a hard limit of 100 or something. I might also log the start and end time of the workflow.

You’re also doing the Schedule API Workflow on a List from the front end… it would be much more performant to call one API workflow in the backend and then do the schedule-on-a-list from there, especially if the list is large.

Generally with bulk delete/bulk edit I either do it for each record OR I do it in batches of 100, since I know the delete-a-list and change-a-list actions are unreliable on large lists. Sets of 100 are always reliable.

To do that I either pass the full list into the workflow and just remove items from it as they’re processed (not really ideal, since Bubble carries the full objects; you could optimize by passing just IDs). Or I use a search capped at 100 results and end the recursion when the result count is less than 100. You’ll also need to ensure the recursion doesn’t double up on itself, otherwise you’ll get a bunch of search-and-deletes happening at the same time, which means excessive WUs and iterations.
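The batch pattern described above can be sketched as plain code. This is a minimal Python analogue, not Bubble itself: `search_up_to` and `delete_records` are hypothetical stand-ins for Bubble’s search and Delete a list of things actions.

```python
BATCH_SIZE = 100

def make_store(n):
    """Hypothetical datastore: just a list of record ids."""
    return list(range(n))

def search_up_to(store, limit):
    """Stand-in for a search capped at `limit` results."""
    return store[:limit]

def delete_records(store, records):
    """Stand-in for 'Delete a list of things'."""
    for r in records:
        store.remove(r)

def recursive_delete(store, batch_size=BATCH_SIZE):
    """Delete in batches of `batch_size`; stop on a short batch.

    Crucially, each iteration only starts after the previous one
    finishes, so two search-and-deletes never overlap (no race).
    """
    while True:
        batch = search_up_to(store, batch_size)
        delete_records(store, batch)
        if len(batch) < batch_size:  # short batch => nothing left beyond it
            return

store = make_store(1570)
recursive_delete(store)
print(len(store))  # 0
```

The short-batch check is the termination condition the post describes: when a search returns fewer than 100 results, you know this is the final batch, so you delete it and stop scheduling further iterations.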

Thanks for that… it literally never occurred to me that, instead of using ‘5 minutes ago’, the timestamp would sort of ‘lock in’ the time filter; makes complete sense.

I do not think there is. I saw very odd behavior in terms of numbers. It started with a list of 1,570, with the first run deleting around half. The next run had 780 and deleted around half again. The next run was 380, then 180, then 70, then 27, then 9, then 2, and then finally the last run deleted them all.

So a very buggy experience, as there seemed to be no rhyme or reason to why it failed or how many items it got through before failing. In every run the data load was the same, so there is absolutely no reason the first run could do nearly 800 while all subsequent runs did dramatically less; it would not have been due to a timeout. The logs also suggest it wasn’t a timeout issue: there were only around 2-3 minutes of logs, and if a timeout had occurred, and it is true that backend workflow timeouts are 5 minutes, I would have expected to see 5 minutes’ worth of logs.

Strange behavior.

Are you recursively deleting? It could be a race condition.

I had something similar when doing a recursive delete: I would schedule the same workflow again, but it was scheduled and ran before the first one actually completed, and that caused issues with the search-and-delete.

Very odd that it deletes half roughly each time. Sounds like a bug.

No, it is a backend workflow with one action, Delete a list of things, and the backend workflow is scheduled via Schedule Backend Workflow on a List. The list is of texts: IDs of items that are duplicated in the DB (multiple unique IDs but the same source ID on a field, so I can easily see duplicates). The Delete a list of things action deletes ‘Do a search for [custom data type] whose ID field is the text’ coming from the list of IDs sent in via the Schedule Backend Workflow on a List. None of the things should have race conditions.

Personally, I’d log the data coming into the backend workflow and then cross-check it. Then log the search results before they’re deleted.

That should reveal the break in the logic chain. I had an issue recently where AI was mixing up numbers in the IDs - it would randomly change two digits to +1 of the original. The only way I was able to see it was to log input and output and do a comparison. At first glance the quantity of IDs and their lengths looked correct, but the AI had fudged the data on a few of them at random.
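A comparison like the one described can be as simple as diffing the logged input and output ID lists. A minimal sketch, with made-up example IDs, where the counts match but the values don’t:

```python
# Hypothetical logged IDs: what was passed in vs. what the workflow reported back.
input_ids = ["CA001", "CA002", "CA013", "CA027"]
output_ids = ["CA001", "CA003", "CA013", "CA028"]

# At first glance the counts match...
assert len(input_ids) == len(output_ids)

# ...but a set difference exposes the altered values immediately.
missing = set(input_ids) - set(output_ids)      # IDs dropped or altered on the way out
unexpected = set(output_ids) - set(input_ids)   # IDs that shouldn't be there

print("missing:", sorted(missing))
print("unexpected:", sorted(unexpected))
```

Anything in `missing` or `unexpected` is exactly the “break in the logic chain”: a record that went in but never came out, or one that appeared from nowhere.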

In cases like these I just make my logs increasingly granular until I find the issue.

That’s how, on several occasions recently, I’ve found core faults in the Bubble editor - everything was correct at every step in my logic, but the front end or backend was evaluating it incorrectly.
