Updating record not reliable?

Hi all,

I’m having trouble getting a record to update reliably.

I want to work out when an API workflow on a list has finished, so that I can then start another workflow which depends on the first one having completed.

To try and do this, I created a data type called log, which amongst other things has total records and completed records fields.

I have an API workflow which does the following:

  • Creates a new log entry
  • Updates the total records to be processed field and sets the completed to 0.
  • Schedules an API Workflow on a list of the things to be processed, passing the log record as a parameter to the workflow on a list

The API Workflow on a list does the following:

  • Creates a new record of the type of thing being processed
  • Increments the log’s completed field by 1
  • If the total field equals the completed field, sets another field called finished to “yes”
  • (Then hopefully I can do other stuff based on the fact that it is finished; there’s a rough sketch of this logic below)
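
In case it helps to picture it, here’s that logic as a plain Python sketch rather than Bubble workflows (the names just mirror the fields I described above, and the simple loop stands in for the scheduled list, so it runs one item at a time and doesn’t show the problem I’m seeing):

```python
from dataclasses import dataclass

@dataclass
class Log:
    total: int            # total records to be processed
    completed: int = 0    # incremented by each run of the workflow on a list
    finished: bool = False

created_records = []      # stands in for the records the list workflow creates

def item_workflow(item, log):
    """One run of the API Workflow on a list."""
    created_records.append(item)       # create the new record (this part works every time)
    log.completed = log.completed + 1  # read-then-write increment on the shared log
    if log.completed == log.total:
        log.finished = True            # the last run flips the finished flag

def main_workflow(items):
    """The workflow that kicks everything off."""
    log = Log(total=len(items))        # create the log entry; completed starts at 0
    for item in items:                 # stands in for Schedule API Workflow on a list
        item_workflow(item, log)
    return log

print(main_workflow(["a", "b", "c"]))  # Log(total=3, completed=3, finished=True)
```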

Sometimes this works fine, but sometimes the increment fails, either on the second-to-last record being processed or a few records earlier. Since it does work correctly sometimes, I’m at a loss to work out why it’s failing. By the way, the records created by the workflow on a list are created correctly every time.

I’ve tried various values for the interval on the workflow on a list, but I need something which is reliable even if it has to run slowly.

I saw this post from a while ago, which might have something to do with it:

Any suggestions for a reliable method to do this would be much appreciated.

Cheers, Andrew

Sounds like a race condition: two or more actions executing at the same time. This can easily happen when using the Schedule API Workflow on a list action. In your case, two actions read the same completed value and both increment it to the same result, so one of the updates is lost.
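
Here’s a tiny, self-contained illustration of that lost-update behaviour in plain Python (nothing Bubble-specific, and the sleep is only there to make the collision easy to reproduce):

```python
import threading
import time

class Log:
    def __init__(self, total):
        self.total = total
        self.completed = 0

lock = threading.Lock()

def naive_increment(log):
    current = log.completed        # two runs can read the same value here...
    time.sleep(0.001)              # (tiny pause just to widen the window)
    log.completed = current + 1    # ...and both write back the same result: one update is lost

def locked_increment(log):
    with lock:                     # serialising the read-modify-write avoids the lost update
        log.completed += 1

def run(increment, workers=50):
    log = Log(total=workers)
    threads = [threading.Thread(target=increment, args=(log,)) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log.completed

print("naive: ", run(naive_increment))    # usually ends up well below 50
print("locked:", run(locked_increment))   # always 50
```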

If making the interval larger on the list processing doesn’t help, maybe you can peel off the action that increments the log into its own API workflow, and space those workflows out at random intervals to reduce the chance of them executing at the same time?

Here’s a post on what that might look like: Missing things omitted from list race condition

I think there are also some plugins that can generate the random number instead of using the database.
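
Roughly what I mean, sketched in Python rather than Bubble (the delay numbers are made up; in Bubble the equivalent would be the value you feed into Schedule API Workflow):

```python
import random

BASE_DELAY = 2.0   # seconds before the peeled-off increment step runs (made-up value)
JITTER = 3.0       # extra random spread added on top (made-up value)

def increment_delay():
    # Each item's increment gets its own slightly different delay, so the
    # updates to the log are far less likely to land at exactly the same time.
    return BASE_DELAY + random.uniform(0, JITTER)

for i in range(5):
    print(f"item {i}: schedule the increment workflow in {increment_delay():.2f}s")
```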

Thanks for that. A race condition sounds like it could be the culprit.

I’ll try playing around with the intervals, and will also look at a different approach that only creates records, rather than updating a shared record. Perhaps I can use another scheduled workflow to check whether the repeating workflow has finished or not.

I’d rather not manually slow down execution with excessively large intervals, because I might then upgrade capacity and have workflows running slower than they’re capable of.

Either way I need a robust and reliable solution which will work cleanly even if the system is very busy.
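
For what it’s worth, here’s a rough Python sketch of the create-records-only idea I’m thinking about (the Completion type and names are made up; the point is that each run only creates its own record, so there’s nothing shared to collide on):

```python
from dataclasses import dataclass

@dataclass
class Completion:
    log_id: str     # which batch this belongs to
    item: str       # the thing that was processed

completions = []    # stands in for a Completion data type in the database

def item_workflow(log_id, item):
    # Each run only creates its own record; nothing shared gets updated,
    # so there's no increment to lose.
    completions.append(Completion(log_id=log_id, item=item))

def check_workflow(log_id, expected_total):
    # A separate scheduled workflow counts the completions and compares
    # against the total recorded when the batch was kicked off.
    done = sum(1 for c in completions if c.log_id == log_id)
    return done >= expected_total

items = ["a", "b", "c"]
for item in items:
    item_workflow("log-1", item)
print(check_workflow("log-1", expected_total=len(items)))  # True
```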

Hey Andrew:

Unfortunately the problem that you’re encountering (at least in my experience, as outlined in the post of mine which you shared here) likely goes beyond an easy fix. Capacity issues seem generally to lie at the heart of this, and while increasing your capacity can reduce or largely eliminate it, it’s not a guaranteed solution.

I tried a few project-specific workarounds that did get me some positive results; however, as above, they didn’t guarantee that the issues wouldn’t affect my users. Essentially, capacity is just a tricky thing, especially when the workflow involves multiple list-based actions.

Ultimately I had to go back to the drawing board on this one. In my case I was able to create a list field capable of containing the data from the external API, rather than using an API workflow to create each new item as a separate thing.

In your case, however, one of my initial workarounds may be useful…

You can create a specialized intermediate page and workflow which, based on any number of potential conditions (e.g. every minute, or when a certain number of records are completed), causes the page to refresh. If done correctly, this can force the API workflow to pick up where it left off by continuing to refresh the specialized page until your prescribed condition for success is met (e.g. when the log entries indicate that the total records remaining to be processed is 0).
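
In plain terms it’s just a poll-until-done loop, something like this Python sketch, where the sleep stands in for the timed page refresh and the cap stops a stuck run from looping forever (the names and numbers are mine, not anything Bubble provides):

```python
import time

def wait_until_finished(remaining, interval_seconds=1, max_checks=30):
    """Poll until nothing is left to process, or give up after max_checks."""
    for _ in range(max_checks):
        if remaining() == 0:
            return True               # condition met: safe to run the next step
        time.sleep(interval_seconds)  # stands in for the timed page refresh / re-check
    return False                      # bail out so a stuck run doesn't loop forever

# Fake 'Do a search for' the remaining records, counting down as items finish.
counts = iter([3, 2, 1, 0])
print(wait_until_finished(lambda: next(counts), interval_seconds=0.1))  # True
```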

It’s still not necessarily a universal solution, but may be helpful given the structure you’ve created for the data in your app.

Interesting. I’ll give it a try.

I’m surprised that this is turning out to be so difficult. Given that Bubble is so API-centric, and that calls to hierarchical information from an API are asynchronous, surely this is a common issue?

A possible solution would be for Bubble to generate an event once all the processes created by an API Workflow on a list have completed. Then you could just run the next thing on the agenda once that event fires.
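
Conceptually it would just be a completion callback for the whole list; here’s what I mean in plain Python asyncio (this isn’t something Bubble exposes today, just an illustration of the idea):

```python
import asyncio

async def process_item(item):
    await asyncio.sleep(0.01)            # stands in for one run of the per-item workflow
    return f"created {item}"

async def on_list_finished(results):
    # This is the 'event' I'd like Bubble to fire once the whole list is done.
    print(f"all {len(results)} items done, kicking off the next workflow")

async def main():
    items = ["a", "b", "c"]
    results = await asyncio.gather(*(process_item(i) for i in items))
    await on_list_finished(results)

asyncio.run(main())
```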

I’ll report back if I make any progress 🙂
