Does anyone have recommendations on how to create new items for large lists of things (each with 1 field) quickly? (Even under 30 min would work.)
We've tried both the recursive route and an API workflow on a list, but the list sizes are 8-12k and users will be doing this a couple of times (2-5) per day.
A backend workflow with even a 1-second delay (not scalable) would be expected to take 166 minutes, almost 3 hours, to create a 10k list of single-field text records.
We've tried splitting it into a 4-bucket setup where 25% of the total list went to each bucket, in theory cutting the time down by 4x, but we destroyed our Professional plan server's capacity within seconds of the workflow starting.
I wonder if the Data API might be worth a try - up to 1,000 items per request. I've never had a need to do anything like that, so let me know if/how it works out.
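For anyone curious what that might look like: below is a rough sketch of batching records for Bubble's Data API bulk-create endpoint, which (per Bubble's docs at the time of writing) accepts newline-delimited JSON, one object per line, up to 1,000 per request. `APP_URL`, `TYPE`, the field name, and the API key are placeholders - check your own app's API settings before relying on this.

```javascript
// Split a flat list of values into request-sized batches.
function toBatches(items, size = 1000) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// The bulk endpoint expects newline-delimited JSON, one object per line.
function toBulkBody(batch) {
  return batch.map((item) => JSON.stringify(item)).join('\n');
}

// Hypothetical usage -- APP_URL, TYPE, and API_KEY are placeholders:
async function bulkCreate(values) {
  for (const batch of toBatches(values)) {
    await fetch('https://APP_URL/api/1.1/obj/TYPE/bulk', {
      method: 'POST',
      headers: {
        'Content-Type': 'text/plain',
        Authorization: 'Bearer API_KEY',
      },
      // Wrap each raw value in the single text field being created.
      body: toBulkBody(batch.map((v) => ({ text_field: v }))),
    });
  }
}
```

A 10k list would then be 10 requests instead of 10k workflow runs, though as noted later in the thread, the capacity cost per call is a different story.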
I'd like to know how as well. Given it's just one field, could you try an alternative approach of just storing the list as a single delimited text field (CSV or JSON)? Maybe not, but worth thinking about. I do it with "scores", e.g.
1,2,3,4,5,1,2,3,4,5,
Hmmm, that would work. I guess I didn't clarify: it's the 1 text field but also linked to a user, so 2 fields total. But in theory we may be able to get something like that to work.
A field on the User, "my list delimited", containing
red|orange|green|blue|yellow|red …
In my app I have 1000s of users, all of which have 100s of scores (which could mean 100,000 DB objects), so instead I have a string of hundreds of numbers on the User (more or less).
And then I use regex to edit the string to change individual scores - it's kind of mental, but I couldn't find a better way.
Nowadays I'd probably store it as JSON and use JS functions rather than regex.
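To make the comparison concrete, here's a small sketch of all three variants on a hypothetical comma-delimited scores string like the "1,2,3,4,5,…" example above - a regex edit, a split/join edit, and the JSON version. Function and field names are illustrative, not anyone's actual app code.

```javascript
// Regex approach: skip `index` comma-terminated tokens, replace the next one.
function setScoreRegex(csv, index, value) {
  const re = new RegExp(`^((?:[^,]*,){${index}})[^,]*`);
  return csv.replace(re, `$1${value}`);
}

// Split/join approach: simpler and easier to reason about.
function setScoreSplit(csv, index, value) {
  const parts = csv.split(',');
  parts[index] = String(value);
  return parts.join(',');
}

// JSON variant: store the list as a JSON array string instead.
function setScoreJson(jsonText, index, value) {
  const scores = JSON.parse(jsonText);
  scores[index] = value;
  return JSON.stringify(scores);
}
```

For example, `setScoreSplit('1,2,3,4,5', 2, 9)` yields `'1,2,9,4,5'`. The split/join and JSON versions avoid the regex escaping pitfalls, which is presumably why JSON looks more attractive in hindsight.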
Hmmm, will try this. For flexibility in the future we'd prefer an individual data type in case we add more fields, but that's 100% an option if we never expand the fields.
Yeah, good idea. A large delimited string might work, depending on the requirements. It's especially easy with the recently added :split by operator. I've used a similar approach as a convenience when a custom data type would be overkill and/or speed was a priority.
-Steve
EDIT
Just keep in mind that if the string is really large, it will consume more browser resources. A "proper" and more flexible approach would be a custom data type for sure.
Have you considered a different backend for this use case? Today I created at least 600 records in 1 second in Google Firestore.
We can make it there and bring it back fast!
This has been by far the fastest route, 1k items taking <60 seconds. It came with a major downside though: list sizes of 100/500/1000 (the max per call) seem to always throw an error at the end of every single call, thus stopping any later actions. Also, after multiple tests, on occasion it fails partway through and doesn't finish the entire upload. Since these fail midway, there actually isn't an error being returned, due to it being a system failure. Worth noting I've kept it under 20% capacity on the Professional plan with this method. I'm actually formatting 100s of rows of the data on-page using :find & replace + dynamic fields so I don't have to manually set hundreds of items conditionally.
Let me know if you want me to dive deeper into the findings.
I really appreciate the follow up, Chris. Bummer about the errors, but hopefully, you manage to come up with a way to handle or avoid them. I suspect every use case will be a bit different depending on the amount of data and processing involved. Faster but harsher on capacity seems to be the take-away.
When you say "bring it back", do you mean it lives in Firestore permanently and you retrieve it when needed from there? Or do you mean you integrate it back into the Bubble database later, and if so, how?
Thanks for the question @sudsy . I suppose it really depends on your needs. But as an example,
Using a Firebase function, I update a list of ~1400 objects in Firebase daily (not Firestore, though I use that for other things). I can create that and then retrieve the list in Bubble in seconds.
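For reference, bulk creation on the Firestore side usually goes through batched writes, which are capped at 500 operations per batch - so ~1,400 records means three commits. A minimal sketch, assuming the firebase-admin SDK with placeholder collection and field names:

```javascript
// Split the full list into batch-sized groups (Firestore caps a batch at 500).
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Create one document per item using batched writes.
async function createRecords(db, items) {
  for (const group of chunk(items, 500)) {
    const batch = db.batch();
    for (const item of group) {
      // doc() with no argument generates a random document ID.
      batch.set(db.collection('records').doc(), { value: item });
    }
    await batch.commit(); // each commit is a single round trip
  }
}
```

Three round trips for 1,400 records is why this feels instant next to one workflow run per record.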
Using a custom-built plugin, I get the data to the plugin via the Firebase API. In the plugin I create an API call with a defined data type that matches my return data from Firebase. I then create an element, set an input field to be app type, and set an exposed state to be that app type as a list. In the code I get the list from Firebase and "forEach" the returned data into a list that I publish as the exposed state.
So in my example, it does live there but can be retrieved at any time. If I needed things to run in the backend, I would create actions instead of elements to get and process the data.
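The plugin pattern described above might be sketched roughly as below: fetch the list, map ("forEach") each returned document into a flat list, and publish it via the element's exposed state. The URL, field names, and exposed-state name are illustrative, and the commented Bubble wiring assumes the plugin editor's `instance.publishState` API - verify against the plugin docs.

```javascript
// Pure helper: map returned documents into the flat list Bubble will consume.
function toStateList(docs) {
  return docs.map((doc) => doc.value);
}

// Inside a Bubble element's update function (hypothetical wiring):
// function update(instance, properties, context) {
//   fetch('https://YOUR-PROJECT.firebaseio.com/records.json')
//     .then((res) => res.json())
//     .then((docs) => instance.publishState('record_list', toStateList(docs)));
// }
```

Keeping the mapping in a pure function like `toStateList` makes the transform easy to test outside the plugin sandbox.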
Yeah, for sure. I was thinking if the requirements called for the data to be captured and accessible as quickly as possible, it could be stored in a "fast" external system (such as Firebase); but then it could be brought back into Bubble during "off-peak" hours via a capacity-friendly recursive workflow, for example.
Lots of variables to consider, but generally speaking, I like the simplicity and flexibility afforded by having Bubble be the long-term data store.
Also a good option! I find that sometimes having it all in one system can be really, really nice. Sometimes it makes much more sense to split it up or to offload the backend completely, but then we get away from the convenience that is Bubble. The nice thing about Bubble is that we have that option, IMO.