Sergey, I really appreciate the offer! I may take you up on it but probably only when I’m in a position to pay you for the work. (since this is what you do at mintflow)
We do not charge for something that doesn’t take long to explain.
Also, I think it will be helpful for others as well if you can post it here.
I’ll post some thoughts here over the next few days.
@lottemint.md Constructive feedback is always appreciated! Our team worked hard to review and create these for the community, so if you or anyone else spots alternative data structures or ideas for building these apps, we will absolutely work to incorporate them. These tutorials are meant to be a community resource, so we’re always down to credit folks who give strong community contributions!
Ok Sergey, bear with me, we’re going back several weeks and a lot has changed since then (and my memory is imperfect for sure).
The basics: building a personal task app – mostly for my wife.
At the time, I had 3 basic Things:
- Task Lists
(screen shots of those data things to follow below)
I was working on displaying a User’s task lists and their tasks, so I had a horizontal repeating group that contained the user’s task lists. (I don’t have any screenshots of the page at that time)
Within each cell of the HRG was/is a vertical RG displaying that list’s tasks … pretty simple right?
At the time, I thought: I don’t need a mutual relationship between users and tasks (or task lists, for that matter). I can just look up the relevant task lists via the owner field on Task Lists (or on Tasks), and look up task-list affiliation via the task list field on the Task.
This is a current screenshot, but the result at the time was very similar:
However, even with a very small number of task lists (let’s guess it was around 5 at the time) and a small number of tasks (<50), the nested RG took 2-5 seconds to return and render. Now, I’m on a free plan so it’s not like I’m expecting blazing speed, but this was s-l-o-w. Keep in mind, other than filtering for the Current User, I wasn’t doing any other filtering on the RGs yet, so I had to find a different solution.

I did some digging on the forums, and a lot of folks recommended keeping lists of the user’s things on the user object so that you weren’t traversing the whole table to get just a few results. Tried that out and it does indeed return faster, so I switched wholesale to that model.
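The two data models being compared above can be sketched in plain Python (this is an illustration of the idea, not Bubble’s actual internals; the field names are made up):

```python
# Model A: each task list only points at its owner, so finding a user's
# lists means scanning the entire table.
task_lists = [
    {"id": 1, "owner": "alice", "name": "Groceries"},
    {"id": 2, "owner": "bob",   "name": "Work"},
    {"id": 3, "owner": "alice", "name": "Chores"},
]

def lists_for_user_scan(user):
    # O(n) over every task list in the app, for every lookup
    return [tl for tl in task_lists if tl["owner"] == user]

# Model B: the user record carries a list of its own task-list IDs,
# so the lookup only touches that user's own entries.
by_id = {tl["id"]: tl for tl in task_lists}
users = {"alice": {"task_list_ids": [1, 3]}}

def lists_for_user_direct(user):
    return [by_id[i] for i in users[user]["task_list_ids"]]

# Both return the same lists; Model B just skips the table scan.
assert lists_for_user_scan("alice") == lists_for_user_direct("alice")
```

The trade-off, of course, is that Model B means maintaining that list on the user whenever task lists are created or deleted.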
Data Model Screen Shots:
(red bars are striking out things/fields that did not exist at the time)
Task List Thing
That’s what Josh was also explaining in one of the numerous performance Q&As.
So yes, I agree with @lottemint.md that it’s best to suggest a more scalable data model for beginners.
It is like a series of sieves with increasingly narrower mesh.
Move as many constraints as “high” (left … if that makes sense!) as possible in your search.
If you can move something from a filter to a search constraint … do it. Anything you can do on the main search constraint to reduce the load coming into your filter is a good thing. It is “cheap” searching.
If you have to do a filter (or worse an advanced filter) on a large set then think about the size of the thing you are pulling back.
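The “sieve” idea above can be sketched in a few lines of Python (a toy illustration of the ordering principle, not how Bubble executes searches):

```python
# 10,000 records spread across 100 users.
records = [{"user": i % 100, "score": i} for i in range(10_000)]

def expensive_check(r):
    # stand-in for an "advanced filter" that is costly per record
    return r["score"] % 7 == 0

# Wide mesh first (bad): the expensive check runs on all 10,000 records.
slow = [r for r in records if expensive_check(r) and r["user"] == 42]

# Narrow mesh first (good): the cheap constraint cuts the set to ~100
# records, and the expensive check only runs on those survivors.
narrowed = [r for r in records if r["user"] == 42]
fast = [r for r in narrowed if expensive_check(r)]

assert slow == fast  # same answer, far less expensive work
```

Same result either way; the difference is how many records the costly step ever has to look at.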
I have to jump in again. @lottemint.md please consider this mode of thinking, and let me know your opinion:
In your original post you have a data table: Pin
And an item called Details - which references an item in another table called PinDetails
Here’s what I think.
If you Do a search for Pins you will also load PinDetails
If you look in the developer console (like the screenshot you provided) you will likely see something like
Which suggests to me that if you do a search for Pins, you are also loading PinDetails, which in my opinion may not have as much of a database advantage as has been suggested here.
I agree 100% that we should set up different data tables. It’s good practice for general app maintenance, and it also optimizes workflows in some instances (by removing “code”/load).
But I am still convinced that if you Do a search for Pins, you are also going to be loading PinDetails (regardless of it hiding in a different data table)…
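The concern here can be sketched as the difference between eagerly resolving a reference at fetch time versus leaving it as an ID (a toy illustration; whether Bubble actually resolves references eagerly is exactly the open question in this thread):

```python
# A heavy detail row referenced by a lightweight Pin row.
pin_details = {10: {"payload": "large blob of detail data..."}}
pins = [{"id": 1, "details_id": 10}]

def fetch_pins_eager():
    # Resolving the reference at fetch time drags the heavy row along,
    # so separating the tables buys nothing on the initial load.
    return [{**p, "details": pin_details[p["details_id"]]} for p in pins]

def fetch_pins_lazy():
    # Leaving the reference as an ID keeps the initial fetch light;
    # details load later, only when actually displayed.
    return [dict(p) for p in pins]

assert "details" in fetch_pins_eager()[0]
assert "details" not in fetch_pins_lazy()[0]
```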
And @vladlarin, I agree Josh is probably the best resource on performance. In the link you provided, though, he says:
The more data a search fetches, the longer it takes. That probably sounds pretty basic, but we routinely see pages that fetch thousands of data items on page load
I realize there is likely more depth and context here. But as a general principle, what Josh is saying here kind of conflicts with the idea of a user in @lottemint.md’s Pinterest example searching through 10 million pins to find 5 saved pins…
So even if you constrain the search so tightly that the servers themselves are gasping for air, it still seems like a weirdly intensive query…
Sorry to continue this thread, guys, but I have to admit I am not totally satisfied. I think some type of concrete answer has yet to surface.
Do people at least understand what I am trying to say?
Yes, we “tested” this some time ago (load a page, then drop the internet connection) and it does seem to behave as you suggest. That more than just the thing you loaded is loaded…
Searching through 10M records is not the same as fetching 10M records.
A Do a search for request is executed server-side, to the best of my understanding, so what’s actually fetched to the page in this scenario is only the 5 pins (and they’re lightweight, as the TS suggested).
So there’s no major contradiction here from what I can tell.
Ya that’s fair. Maybe that’s right.
I’m only keeping this thread going because I myself don’t know the answer.
I’m just wondering though. If you have a page that only displays 5 pins, and you are fetching them from a database of 10M pins. Sorting them by something like created date seems like it would be naturally fairly fast.
But if you have a constraint pin = saved by current user.
Wouldn’t the database have to first look through all 10M records to ensure that it is actually displaying the current user’s saved pins, and not some other user’s?
That’s my thinking. Like I say, I don’t know how it works. I’m not trying to win any arguments here
One thought would be:
If a data table Savedpin
had a field called User - with data type = number (1, 2, 3 etc)
Then you could Do a search for Savedpin where User = the current user’s number
and then sort the search by number.
So presumably the database would search all the records, starting at user = 1, then user = 2, etc. That, in theory, could be a fast search.
The only problem with that is that the pins might be displayed on the page randomly, instead of by saved date or something (due to the search being done, based on number).
Anyway I’m just thinking out loud here at this point
Probably not, as we know that Bubble (like most relational databases) has indexes. We are told that Bubble builds indexes in the background based upon access.
So in this case there may well be an Index that is sorted by User. So all you need to do is find the first and last entries in the Index and you have all the underlying records for the User.
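That range-scan idea can be sketched with Python’s `bisect` module (a simplified illustration of how a sorted index avoids a full scan; the actual on-disk structures a database uses, such as B-trees, are more involved):

```python
import bisect

# A toy "index": (user_id, record_id) pairs kept sorted by user_id.
# 100,000 records spread across 5 users.
index = sorted((rec % 5, rec) for rec in range(100_000))
users_only = [u for u, _ in index]  # the sorted key column

def records_for_user(user):
    # Two binary searches find the first and last entries for this user;
    # everything in between belongs to them, and no other row is touched.
    lo = bisect.bisect_left(users_only, user)
    hi = bisect.bisect_right(users_only, user)
    return [rec for _, rec in index[lo:hi]]

# User 3 owns 20,000 of the 100,000 records, found without scanning
# the other 80,000.
assert len(records_for_user(3)) == 20_000
```

So a constraint like “saved by Current User” can be answered by jumping straight to that user’s slice of the index, rather than testing all 10M rows one by one.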
Well said Nigel! That makes sense.
This forum thread brings up a common topic, so I decided to write an entry in our Bubble Manual about it: https://manual.bubble.io/working-with-data/connecting-types-with-each-other
Please let me know if there’s any feedback!
Reading through your update to the Manual, and it says that lists max out at 1000 entries… from everything I’ve seen on the forum - I thought that this had been increased to 10,000 entries a long time ago? Has this changed again?
Or am I mixing things up, and perhaps the 10,000 entries max applies to something else?
You’re right, my mistake! Updated that sentence in the Manual to say 10,000
@shu.teopengco Gitbook (where we host the Manual) was down earlier today, but it appears to be back up for me.
I can open it now. Question here. If I choose option 1 or 2, how would I migrate to option 3 later if I have a lot in my database?
In the past, I’ve done these migrations by creating a hidden page that’s not linked anywhere and putting a button on it. The button grabs a bunch of unmigrated records and schedules an API workflow on that list. The API workflow processes each entry however needed for the migration.
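The batch-and-reschedule pattern described above looks roughly like this in plain Python (a hypothetical sketch standing in for Bubble’s “Schedule API workflow on a list”; the field names are made up):

```python
def fetch_unmigrated(db, batch_size=100):
    # Stand-in for the button's search: grab a bunch of records that
    # haven't been migrated yet.
    return [r for r in db if "new_field" not in r][:batch_size]

def migrate_record(record):
    # Whatever per-record transformation the new data model needs,
    # e.g. moving an embedded value out to its own type.
    record["new_field"] = record.get("old_field", "").upper()

def run_migration(db):
    # Keep grabbing batches until nothing unmigrated remains; each pass
    # corresponds to one press of the hidden page's button.
    while batch := fetch_unmigrated(db):
        for record in batch:
            migrate_record(record)

db = [{"old_field": f"item{i}"} for i in range(250)]
run_migration(db)
assert all("new_field" in r for r in db)
```

Because the check is “has this record been migrated yet?”, the process is safe to re-run if a batch is interrupted partway through.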
Hope that helps!