
Dedicated table for searches & data synchronization

I am looking for a sanity check on my database design for searching: I want an app-wide search for items that exist in multiple tables: Users, Posts, and Assignments.

I am thinking of extracting the searchable data (names, post titles, post text, assignment name) and putting that data into a single separate table for easy searching.

The “search” table will have 3 fields:
Content (the stuff I want to search)
Type (what other table it’s in)
ID (unique ID of the thing the content came from)

The main tables will each get a search_id field for every field that's being synced to the Search table. For example, if I'm bringing over 2 fields from Users (first name and last name), I will create a firstname_searchid and a lastname_searchid field in the Users table to tie the records together for updating purposes. The workflow that adds/updates the User field would also add/update the Search Content.
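To make the proposed design concrete, here is a minimal sketch of it in plain SQL via Python's sqlite3. Bubble doesn't expose SQL, so the table and field names (users, search, firstname_searchid, etc.) are just illustrations of the structure described above, not real Bubble schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    first_name TEXT,
    last_name  TEXT,
    firstname_searchid INTEGER,   -- ties the user row to its search row
    lastname_searchid  INTEGER
);
CREATE TABLE search (
    id      INTEGER PRIMARY KEY,
    content TEXT,   -- the stuff to search
    type    TEXT,   -- which source table it came from
    ref_id  INTEGER -- unique ID of the source record
);
""")

def add_user(first, last):
    """Write the user AND manually keep the Search table in sync."""
    uid = conn.execute(
        "INSERT INTO users (first_name, last_name) VALUES (?, ?)",
        (first, last)).lastrowid
    s1 = conn.execute(
        "INSERT INTO search (content, type, ref_id) VALUES (?, 'Users', ?)",
        (first, uid)).lastrowid
    s2 = conn.execute(
        "INSERT INTO search (content, type, ref_id) VALUES (?, 'Users', ?)",
        (last, uid)).lastrowid
    conn.execute(
        "UPDATE users SET firstname_searchid = ?, lastname_searchid = ? WHERE id = ?",
        (s1, s2, uid))
    return uid

add_user("Ada", "Lovelace")
rows = conn.execute("SELECT type, ref_id FROM search WHERE content LIKE ?",
                    ("%Love%",)).fetchall()
print(rows)  # -> [('Users', 1)]
```

Note that every write path (add_user, and any update/delete) has to touch both tables, which is the synchronization burden discussed in the replies.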

This way, the app-wide search can just search the Content field of the Search table. Am I thinking about this the right way or am I missing something that would make this a bad solution?

Would love to hear what you ended up doing here, or if anyone else has thoughts on how to best approach this!

Just my thoughts: I would not suggest doing this.

Your code will always have to do two searches (a), and writes to the source tables will become more complicated, since the Search table will have to be manually synchronized on every write to any of them.

(a) You will first have to search the Search table, then do a second search to find the item(s) returned in the corresponding "Type" table. Nesting the searches to combine this into one operation would affect performance, and if your first search returns a list then you have to use an "is in" condition. One issue with searching fields that are lists is that the underlying database normally cannot index them (fast). Hence all searches of tables using lists as target fields are typically done sequentially (slow).
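The two-step lookup described above can be sketched as follows, again using a hypothetical sqlite3 schema (Bubble itself doesn't work this way; this just shows the shape of the problem): one query against the search table, then a second query per type using an "is in" condition.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts  (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
CREATE TABLE search (content TEXT, type TEXT, ref_id INTEGER);
""")
conn.execute("INSERT INTO posts (title, body) VALUES ('Hello world', '...')")
conn.execute("INSERT INTO search VALUES ('Hello world', 'Posts', 1)")

# Step 1: search the denormalized table.
hits = conn.execute("SELECT type, ref_id FROM search WHERE content LIKE ?",
                    ("%world%",)).fetchall()

# Step 2: group hits by type and re-query each source table
# with an IN (...) condition -- the "is in" search on a list.
post_ids = [ref for (typ, ref) in hits if typ == "Posts"]
placeholders = ",".join("?" * len(post_ids))
posts = conn.execute(f"SELECT id, title FROM posts WHERE id IN ({placeholders})",
                     post_ids).fetchall()
print(posts)  # -> [(1, 'Hello world')]
```

Every distinct value in the Type column needs its own step-2 query, so the cost grows with the number of source tables represented in the results.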

In technical database terms, what you are doing is denormalizing the data, which is not really a great idea in this case: Bubble shines as a relational database, which works best normalized. Denormalized data is best suited to applications that provide data-mining capability, which by your description (if I understand it correctly) is not the case here, since I believe your goal is to improve search efficiency.

A more optimal approach would be to maintain the normalized form of the data (each type in its respective table) and expose the fields you deem of most interest to an external search index such as Algolia or Typesense. These services are lightning fast at returning record references, even if you then have to merge or intersect results to achieve the desired search output.
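To illustrate the idea behind such index services (not their real APIs), here is a toy in-memory inverted index: it maps terms to record references, so a search immediately returns (type, id) pairs that you can then merge or intersect, while the full records stay normalized in their own tables.

```python
# Teaching sketch only -- Algolia and Typesense expose far richer APIs
# (typo tolerance, ranking, facets); this just shows the core structure.
from collections import defaultdict

index = defaultdict(set)  # term -> {(type, id), ...}

def index_record(rtype, rid, text):
    """Register every word of a record's searchable text in the index."""
    for term in text.lower().split():
        index[term].add((rtype, rid))

def search(query):
    """Intersect the reference sets for each query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index[terms[0]].copy()
    for term in terms[1:]:
        result &= index[term]
    return result

index_record("Users", 1, "Ada Lovelace")
index_record("Posts", 7, "Ada writes about engines")

print(search("ada"))           # references from both tables
print(search("ada lovelace"))  # -> {('Users', 1)}
```

The search itself is a dictionary lookup plus set intersections, which is why returning references is so fast; fetching the full records is a follow-up step against the normalized tables.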

If others have a better idea or a different perspective, I'd love to learn.


John, thanks for the thoughtful reply. This helps me understand the underlying structural issues better. I will look into Algolia and Typesense.

Hello! I have the same question as @pflanagan884, but with one more constraint: some of my data is protected by Privacy rules, which means Algolia and Typesense can't be used? Am I correct?