I believe going through 100,000 records is not a difficult job for AWS at all. Or am I wrong?
Case No. 1: two tables of 10,000 rows each (Parents and Children).
I want to find the children of a particular parent: I search the 10,000 rows of the Parents table for the parent, then take the found parent and search the 10,000 rows of the Children table for that parent's kids. I search twice, in two sets of 10,000 rows each.
Case No. 2: one table of 20,000 rows (Parents and Children merged into a single People table).
I search the 20,000 rows of my People table for the parent, take the found parent, and then search the same 20,000 rows again for that parent's kids. I search twice, in one set of 20,000 rows each time.
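To make the two cases concrete, here is a minimal sketch of both layouts in SQLite (the schemas, field names, and sample data are my assumptions, not anything from the original platform). The point it illustrates: with an index on the parent reference, both lookups are index seeks, so neither one scans its whole table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Case 1: two separate tables, Parents and Children.
cur.execute("CREATE TABLE Parents (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE Children (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
cur.execute("CREATE INDEX idx_children_parent ON Children(parent_id)")

# Case 2: one combined People table; parent rows have parent_id = NULL.
cur.execute("CREATE TABLE People (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
cur.execute("CREATE INDEX idx_people_parent ON People(parent_id)")

# A little sample data (names are made up for the example).
cur.execute("INSERT INTO Parents VALUES (1, 'Pat')")
cur.executemany("INSERT INTO Children VALUES (?, ?, ?)",
                [(1, 1, 'Kim'), (2, 1, 'Lee')])
cur.execute("INSERT INTO People VALUES (1, NULL, 'Pat')")
cur.executemany("INSERT INTO People VALUES (?, ?, ?)",
                [(2, 1, 'Kim'), (3, 1, 'Lee')])

# Both queries resolve through the parent_id index, not a full scan.
kids_case1 = cur.execute(
    "SELECT name FROM Children WHERE parent_id = 1 ORDER BY name").fetchall()
kids_case2 = cur.execute(
    "SELECT name FROM People WHERE parent_id = 1 ORDER BY name").fetchall()
print(kids_case1)  # [('Kim',), ('Lee',)]
print(kids_case2)  # [('Kim',), ('Lee',)]
```

Both cases return the same answer by the same mechanism; the difference between one table and two is mostly about schema clarity, not search cost.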
So, technically I go through more rows in Case 2, but at a larger scale it should not make much difference how many rows I scan (searching is a tractable problem: an indexed lookup costs roughly O(log n), so doubling the rows barely changes it).
Could you help us establish the truth here?
Does it matter how many tables we have?
What slows things down:
a) the complexity of a search (i.e. the number of relationships between the things being searched: fast when we search for a direct child record, slower for a great-grandchild)?
b) the quantity of data?
c) the width of tables (the number of fields, where more fields means a slower search)?
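On point (b), row count by itself is usually not what hurts; whether the query can use an index is. A small SQLite experiment (table and index names are hypothetical) shows the query planner switching from a full scan of 20,000 rows to an index seek once an index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE People (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
cur.executemany("INSERT INTO People VALUES (?, ?, ?)",
                [(i, i % 100, f"p{i}") for i in range(20_000)])

# Without an index on parent_id, SQLite must scan all 20,000 rows:
# the plan's detail column reports a SCAN of the table.
plan_scan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM People WHERE parent_id = 7").fetchall()
print(plan_scan)

# With an index, the same query seeks directly to the matching rows,
# and the plan reports a SEARCH using the index instead.
cur.execute("CREATE INDEX idx_parent ON People(parent_id)")
plan_index = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM People WHERE parent_id = 7").fetchall()
print(plan_index)
```

This is why (a) matters too: each extra relationship hop (child, grandchild, great-grandchild) is another lookup, and each hop is cheap only if the field it joins on is indexed. Field count (c) mostly affects storage and how much data each row drags along, not how many rows must be examined.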
Please, could we have some technical clarity on these matters?
Again, I have one table with about 50 fields. If I were to split this table by “types of things”, I would end up with 20 tables of 4–5 fields each.
I trust the forum is a nice place, but we are talking about weeks of changes just because one person guesses that some method is right while another’s guess is the opposite.
Thank you in advance!
Many more advanced developers may also benefit from clarity about the optimal way to structure data.