So I have found a solution with the help of @chris.williamson1996 and the following post:
For anybody who comes across this, I will try to elaborate a little bit more on how I did it:
-
I have a new field inside the database that holds a list of texts.
-
In a workflow I make changes to this field:
-
where I search for a specific set of keywords
-
This list is then immediately grouped by its identifier (in my case the keyword name itself) and aggregated by count (sorry, not visible here, but it is just one click):
-
Then it is filtered to keep only groups with more than one item (which means there is at least one duplicate of that keyword inside one group):
-
In the end I save each identified keyword to the list.
-
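For anyone who prefers to read the logic as code, this is roughly what the grouping and filtering above boil down to. It is only a minimal Python sketch, not Bubble logic; the function name and the sample keywords are made up for illustration:

```python
from collections import Counter

def find_duplicate_keywords(keywords):
    """Group the list by keyword and keep only the ones that occur more than once."""
    counts = Counter(keywords)  # "grouped by its identifier, aggregated by count"
    return [kw for kw, n in counts.items() if n > 1]  # keep groups with more than one item

# Example: "seo" and "bubble" each appear twice, so both are flagged as duplicates.
duplicates = find_duplicate_keywords(["seo", "bubble", "api", "seo", "bubble"])
print(duplicates)  # ['seo', 'bubble']
```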
Then I run an API workflow with the list:
-
Inside this new API workflow I delete the keyword like this:
-
and then take the first element.
-
After this, the workflow schedules itself again as many times as there are items on the list. The index value is incremented by 1 each time it reschedules, and the condition on the API workflow level is
index <= maxIndex. That is also how you know when the last iteration occurs (there I'd suggest checking index >= maxIndex).
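To make the looping logic easier to follow, here is a rough Python equivalent of what that recursive API workflow does. This is only a sketch of my understanding, not actual Bubble logic: the function names are made up, `field` stands for the list-of-texts field, and I'm assuming each iteration removes exactly one occurrence of one keyword.

```python
def remove_one_duplicate(field, keyword):
    """One iteration of the API workflow: delete a single occurrence of the keyword."""
    field.remove(keyword)  # list.remove() drops only the first match, i.e. one duplicate
    return field

def run_scheduled_workflow(field, duplicates, index=1):
    """Reschedules itself with index + 1; the last iteration is reached when index >= maxIndex."""
    max_index = len(duplicates)
    if index > max_index:  # mirrors the workflow condition: only run while index <= maxIndex
        return field
    keyword = duplicates[index - 1]  # the duplicate keyword handled in this iteration
    field = remove_one_duplicate(field, keyword)
    return run_scheduled_workflow(field, duplicates, index + 1)

field = ["seo", "bubble", "api", "seo", "bubble"]
print(run_scheduled_workflow(field, ["seo", "bubble"]))  # ['api', 'seo', 'bubble']
```

Note that this sketch, like the workflow, removes only one occurrence per keyword and per iteration.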
Be aware that this only removes one duplicate per keyword. If the same keyword appears more than twice, make sure to catch the extra occurrences as well.
This approach does not run into any race condition issues; however, on a WU (workload unit) plan it can come at a higher price than other methods.
Another solution could be to make use of an external service API, e.g. any model from OpenAI could do the job, and depending on the number of items this could be cheaper.
Thanks again to everyone involved.
Cheers




