Hi everyone.
I have a table with 1.2 million entries in it. It was originally designed as a way of tracking events and has long outlived its usefulness, but I never disabled the record creation. Now that I look at it, there are a lot of records, 1.2 million of them.
Any operation via the admin side (e.g. a bulk operation) or via the front end seems to result in a timeout.
Even an export to CSV times out.
There are plenty of records that could be pruned with a very simple “contains” selector, and I’d expect the table to shrink by around 98%, but any operation I try on it just results in some variation of a timeout message.
Any advice?
Have you tried a recursive workflow? Granted, that’s roughly 1.2 million workflows’ worth of WUs, and each workflow will require a search…
Bubble said they’re working on better bulk data manipulation but I wouldn’t hold your breath.
Sounds like you’re on a legacy plan using Capacity since you’re getting timeouts?
If so, you can slowly clean them up in chunks via a recursive workflow, scheduled/spaced apart a bit.
Or jump ship and make a new datatype for the ones you want to keep. Move over all the logic and data you want to keep, then delete the old table.
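If you’re comfortable with a little scripting, a third option is to do the chunked cleanup from outside Bubble via the Data API instead of a recursive workflow. Below is a rough sketch, not a drop-in solution: the app URL, datatype name, field name, “contains” value, and API token are all placeholders you’d swap for your own, and you’d need the Data API enabled for that type in Settings > API.

```python
import time
import requests

# Placeholders only - replace with your own app, datatype, and API token.
BASE = "https://YOUR-APP.bubbleapps.io/api/1.1/obj/event"  # Data API endpoint for the datatype
TOKEN = "YOUR_API_TOKEN"                                   # from Settings > API in the editor
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Same idea as the "contains" selector mentioned above: match the prunable records.
# "event_name" and "something_prunable" are made-up examples.
CONSTRAINTS = '[{"key":"event_name","constraint_type":"text contains","value":"something_prunable"}]'

while True:
    # Fetch one page of matching records (Data API pages are capped at 100 results).
    page = requests.get(
        BASE,
        headers=HEADERS,
        params={"constraints": CONSTRAINTS, "limit": 100},
    ).json()["response"]

    if not page["results"]:
        break  # nothing left to prune

    # Delete the page one record at a time by unique id.
    for thing in page["results"]:
        requests.delete(f"{BASE}/{thing['_id']}", headers=HEADERS)

    # Pause between batches so you don't hammer capacity / rate limits.
    print(f"Deleted a batch; about {page.get('remaining', 0)} matching records were still queued.")
    time.sleep(2)
```

Because each pass deletes what it just fetched, there’s no cursor bookkeeping; it just keeps requesting the first page of whatever still matches until the search comes back empty. Run it overnight with a longer sleep if you’re worried about eating into capacity or WUs.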