As I mentioned, that doesn’t guarantee uniqueness, and often leads to duplicates (due to race conditions).
It seems that when items are created concurrently (even at the exact same millisecond), database trigger events are queued, so they run one at a time.
Getting the ‘Count’ is faster than reading a field value - which explains why, so far, I’ve been unable to get this method to create a single duplicate (whereas reading a field’s value creates duplicates very easily).
My guess would be that, as the DB increases in size, it will take longer and longer to retrieve the ‘count’ - so probably at some point this method will fail due to race conditions, but I haven’t tested it at scale yet.
UPDATE: in further testing I did run into some duplicates using this method - but only 2 out of over 11k items created - so, for most purposes, it’s probably ok as a simple solution (but it’s NOT immune to race conditions).
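Bubble itself is no-code, but the race condition behind the count-based approach can be illustrated in plain Python (a hypothetical stand-in, not Bubble’s actual internals). Two workers each compute “next ID = count + 1”; a barrier forces both to read the count before either writes, which is exactly the interleaving that produces duplicates:

```python
import threading

table = []  # stand-in for a database table of records
barrier = threading.Barrier(2)

def create_item():
    # Derive the "next" ID from the current count (the approach in the post)
    next_id = len(table) + 1
    # Force both workers to read the count before either one writes,
    # simulating two creations landing in the same instant
    barrier.wait()
    table.append(next_id)

threads = [threading.Thread(target=create_item) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(table)  # [1, 1] - both workers computed the same ID
```

In Bubble the window between “read the count” and “write the record” is normally tiny, which is why duplicates were so rare (2 in 11k) - but as long as that window exists, it can’t be fully eliminated.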
In a real-world scenario, there is usually some sort of queue that consumes and writes behind the scenes.
To have this in Bubble, you can write everything destined for the main table into a queue table first, and have a consumer that takes only 1 thing at a time from the queue table → creates the main thing and deletes the processed one from the queue by reference.
You are guaranteed sequential and unique IDs during generation because the consumer generates only 1 per turn, and it loops itself until the ID is unique too. The consumer is user-agnostic and always processes 1 thing at a time, regardless of the queue size.
cons: it will likely be slow for large volumes
Thank you for digging into this.
Had a setup like this before. It works well enough if you’re only reading and writing every so often, but it starts to fail when traffic gets heavy. I was left with a lot of dead records in the queue table, and records that “skipped the queue”.
Also, that adds quite a bit of WU (workload unit) overhead to manage, plus loops to keep in check.