Bubble doesn't handle JSON particularly well, though I have had a few instances where it was necessary to store the whole response as you are trying to do above.
You can create a field of type "whatever your response datatype is called" in your db and drop the entire response in there. Bubble will recognise the structure, but bear in mind the huge caveat that you cannot run a "Do a search for" on a JSON object or its nested data. This means you end up needing a lot of filtering for complex db requests, and it can get very slow/messy.
The other option is to save the entire response's raw text into a field, then use regex to extract what you need, though again this is pretty messy.
In your case, however, since the JSON structure looks pretty simple, I would suggest using a bulk create action via Bubble's Data API, and restructuring your db to expect a single object per "thing".
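For illustration, this is roughly what a bulk create call to the Data API looks like if you fire it from outside Bubble. A minimal sketch only: the app URL, token, data type name (`thing`) and fields (`srid`, `name`) are placeholders you'd swap for your own, and it assumes the Data API is enabled for that type.

```python
import requests

# Hypothetical values - replace with your own app URL, token and type name.
BULK_URL = "https://yourapp.bubbleapps.io/version-test/api/1.1/obj/thing/bulk"
API_TOKEN = "YOUR_BUBBLE_API_TOKEN"

# Each line of the body is one JSON object = one new "thing" created in the db.
records = [
    '{"srid": "A001", "name": "First item"}',
    '{"srid": "A002", "name": "Second item"}',
]
body = "\n".join(records)

resp = requests.post(
    BULK_URL,
    data=body,
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        # The bulk endpoint expects newline-delimited JSON as plain text,
        # not a single application/json payload.
        "Content-Type": "text/plain",
    },
)
print(resp.status_code, resp.text)  # the response contains one status line per record
```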
This depends on what you need to do with the data, but in most cases you will use a backend WF (recursive or scheduled on a list) to process each item in the list separately.
@zzsnowballzz This is your answer basically, though my preference is to use the bulk API to create 100 entries or so at a time.
If you make a bulk create API call, you can pretty much just pass the response straight to Bubble's db; the only change required is formatting it as text with a newline as the delimiter (as opposed to a comma). Then simply schedule the workflow again, only when your list "from entry 101" has a count of more than 0.
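In plain code the recursion looks something like the sketch below. This is just the shape of the logic, not anything Bubble-specific: the chunk size and `bulk_create` stub are assumptions, and in Bubble the same thing is expressed with `:items from #101` plus a "Schedule API Workflow" step gated by an "Only when ... count > 0" condition.

```python
CHUNK_SIZE = 100  # assumption: matches the "100 entries or so" suggested above

def bulk_create(chunk: list[dict]) -> None:
    # Placeholder for the Data API bulk call sketched earlier.
    print(f"creating {len(chunk)} entries")

def process_chunk(items: list[dict]) -> None:
    """One backend workflow run: bulk-create the first 100 items,
    then 'schedule again' on whatever is left (entries 101 onwards)."""
    chunk, remainder = items[:CHUNK_SIZE], items[CHUNK_SIZE:]

    bulk_create(chunk)

    # Bubble equivalent: Schedule API Workflow (this same workflow)
    # with "Only when: list:items from #101:count > 0".
    if len(remainder) > 0:
        process_chunk(remainder)
```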
There's a solid case for scheduling on a list now, however, since Bubble has just launched some upgraded functionality for it and it is significantly cheaper in WF units. The downside there is that you have no control over each workflow run and don't know when the job has completed.
Absolutely. The bulk API is also an option. @zzsnowballzz, if you search the forum, you will find many examples of each of them, including bulk imports using the API.
The above error (if you ignore all the fun encoded quotes) is telling you to only create 1000 entries at a time via the bulk creation API. As mentioned above, I normally run 100 entries at a time (sometimes up to 250), depending on other external APIs' rate limits etc.
Try reducing the creation volume in each instance of the workflow, then iterate over the list in bitesize chunks recursively until the list is empty.
I have "srid", which is a unique ID. When I parse the JSON more than once on the same date, I get a lot of duplicated rows in the db. I really want data to be saved only with a unique srid.
If you just make sure you only start from the next entry, that would normally suffice. If you want to be extra safe, add a constraint to the search feeding your :format as text operator that filters out entries already existing in the db.
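Outside Bubble, the dedup step is just a filter against the srids you already have. A rough sketch of the idea (the field name `srid` comes from the post above; everything else here is illustrative):

```python
def filter_new(items: list[dict], existing_srids: set[str]) -> list[dict]:
    """Keep only items whose srid is not already in the db.
    In Bubble this is the 'Do a search for' constraint on the step
    that builds the bulk body; here it's a plain set lookup."""
    return [item for item in items if item["srid"] not in existing_srids]

# Usage: existing_srids would come from a search of the current db entries.
existing = {"A001"}
incoming = [{"srid": "A001"}, {"srid": "A003"}]
print(filter_new(incoming, existing))  # -> [{'srid': 'A003'}]
```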