I am calling an API that currently returns roughly 1.2 MB of data and about 50k records. The backend development team wants me to store that data on the application side, and then when users need to filter through it, apply filters to that dataset and show them the results.
Given the size and complexity of the returned data (there are a number of arrays nested inside the response), I do not want to store it in the Bubble database. I have run into issues with that in the past when dealing with larger datasets, and I don't think it would even be possible here.
Is there a way I can cache that API call somehow on the backend / at the global application level, then let the client side filter the result set as needed? I am not sure where I would store the API response in that case. What are my options here?
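The fetch-once-then-filter-locally pattern being described can be sketched in plain JavaScript. This is a minimal illustration, not Bubble-specific: the record fields (`price`, `name`) and the shape of the filter object are hypothetical placeholders for whatever the real API returns.

```javascript
// Cache the large API response once, then run every filter pass against
// the in-memory copy instead of re-fetching. fetchFn stands in for
// whatever actually performs the API call.
let cachedRecords = null;

async function loadRecords(fetchFn) {
  if (cachedRecords === null) {
    cachedRecords = await fetchFn(); // only hit the API the first time
  }
  return cachedRecords;
}

// Apply screener-style filters (a numeric range plus a text search)
// to the cached dataset. Field names are illustrative.
function applyFilters(records, { minPrice, maxPrice, search }) {
  return records.filter(
    (r) =>
      r.price >= minPrice &&
      r.price <= maxPrice &&
      (search === "" || r.name.toLowerCase().includes(search.toLowerCase()))
  );
}
```

Even at 50k records, a single in-memory `filter` pass like this is typically fast enough to run on every slider change.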
I'm not sure it will fit because there's a limit on field size, but when you create an API data call, Bubble creates a "type" for that call. You can create one item in the database and set a field to this API type, store the data using "Get data from API Connector" once, and then access it. You will be able to filter it on the front end with the :filtered function. However, you may hit the limit on field size.
Another solution is to create a file from the JSON and use a JSON plugin to read it, and maybe filter it too (I don't know which one can do that…)
That sounds like a really good approach for this particular scenario! Fingers crossed I don’t hit limits. The response won’t change very often, possibly once every year in some cases.
I am going to look into this if the first plan doesn’t work.
This is their solution because we are displaying this data in a table, but the specs call for screener-type filters on the page, such as range sliders and text search. At the top of the return they give me the filter parameters, such as the high/low range for each slider. The only trick is that below that I have 50k records to filter through. I almost think it could be broken out into two APIs: one to load the filter ranges and the other to load the data. There are a lot of calculations on the backend to prep that API data, and the specs have been changing often, so I figure they are just trying to avoid duplicating code they would have to maintain.
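The two-API split doesn't have to duplicate the backend prep work: the ranges endpoint can derive the slider bounds from the same prepared dataset the data endpoint serves. A minimal sketch of that derivation, with a hypothetical field list:

```javascript
// Compute the high/low bounds for each range slider from the prepared
// dataset, so a separate "filter ranges" endpoint shares the same source
// of truth as the data endpoint. numericFields names are assumptions.
function computeFilterRanges(records, numericFields) {
  const ranges = {};
  for (const field of numericFields) {
    const values = records.map((r) => r[field]);
    ranges[field] = { low: Math.min(...values), high: Math.max(...values) };
  }
  return ranges;
}
```

The ranges endpoint would then return a small payload suitable for initializing the sliders, while the heavy 50k-record payload loads separately.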
I may just load that full package in the page load if I have to…