Currently, I’ve got about 15 data types (I’m using “tables” and “data types” to refer to the same thing here). I have different data types (tables) for donors, the items they’ve donated, donor history, contact history, etc.
Due to reporting issues out of Bubble, and the need for third-party integrations at the transaction level, I’m wondering about the ease of reporting from normalized data tables (one table per kind of content) vs. a monster table (all columns/fields in one table).
From a hierarchy, data-cleanliness, and database-efficiency standpoint, I understand that normalization is the best practice, but I’m wondering whether a single table would cut down on the number of transactions and merges required to produce an output for my end users.
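To make the tradeoff concrete, here’s a rough sketch of the two shapes I’m weighing (all field and table names here are hypothetical, not my actual schema):

```python
# Normalized: each data type holds only its own fields, plus a
# unique ID linking it back to the parent record.
donor = {"id": "d_001", "name": "Jane Smith", "email": "jane@example.com"}
donation = {"id": "don_9", "donor_id": "d_001", "item": "Winter coat", "value": 40.00}
contact = {"id": "c_17", "donor_id": "d_001", "channel": "phone", "note": "Thanked for gift"}

# Monster table: every field for every related record lives in one wide row,
# so reports need no joins, but the data is repetitive and harder to evolve.
monster_row = {
    "donor_id": "d_001",
    "donor_name": "Jane Smith",
    "donor_email": "jane@example.com",
    "donation_item": "Winter coat",
    "donation_value": 40.00,
    "contact_channel": "phone",
    "contact_note": "Thanked for gift",
}
```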
I have my data mostly normalized.
I’ve found this gives me the most flexibility in Bubble when I want to do something with the data that I didn’t anticipate.
But I don’t think there’s anything wrong with building a giant table either - it just depends on what you are trying to build and the flexibility you want to have for future changes…
The issue I’m facing is that when an end user wants a report that combines two data types, they end up with fields that show only unique IDs instead of the related records’ values, or they have to do two exports and recombine them.
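For anyone curious, the recombination step outside Bubble looks roughly like this — a minimal pandas sketch, assuming both CSV exports carry a “unique id” column and the donations export stores the donor’s unique ID in a “donor” field (both column names are assumptions here):

```python
import pandas as pd

# Hypothetical file names; these stand in for the two Bubble CSV exports.
donors = pd.read_csv("donors_export.csv")        # unique id, name, email, ...
donations = pd.read_csv("donations_export.csv")  # unique id, donor, item, value, ...

# Join each donation to its donor so the report shows names instead of raw IDs.
report = donations.merge(
    donors,
    left_on="donor",        # field holding the donor's unique ID
    right_on="unique id",
    suffixes=("_donation", "_donor"),
)

# Keep only the columns the end user actually asked for.
report = report[["name", "email", "item", "value"]]
report.to_csv("combined_report.csv", index=False)
```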
Ah - I see, good to know about the CSV limitations.
I’m sorry, there don’t seem to be any workarounds.
The best we can do is put it in as a feature request.
I’m working through a quick hack of this using Google Sheets + Zapier, but I feel like this is the type of thing that should be (or already is) on the list of things to support natively, with some “select fields for export” concept.
In the interim, the basic idea would be to combine the specific data onto a sheet, attach that sheet to an email, and trigger the email whenever a new spreadsheet is created. The user would then receive a “Download Report” email - so you don’t get the perfect auto-download UX, but it’s still a very acceptable UX, I think.
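For completeness, the final “email the report” step could look something like this in plain Python - just a sketch with placeholder addresses and SMTP details (Zapier would handle this part in the actual hack):

```python
import smtplib
from email.message import EmailMessage

def send_report(csv_path: str, recipient: str) -> None:
    """Email a generated report as a CSV attachment."""
    msg = EmailMessage()
    msg["Subject"] = "Download Report"
    msg["From"] = "reports@example.com"  # placeholder sender
    msg["To"] = recipient
    msg.set_content("Your combined report is attached.")

    # Attach the combined report produced in the previous step.
    with open(csv_path, "rb") as f:
        msg.add_attachment(
            f.read(),
            maintype="text",
            subtype="csv",
            filename="combined_report.csv",
        )

    # Placeholder SMTP host/credentials; swap in your provider's details.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("reports@example.com", "app-password")
        server.send_message(msg)

send_report("combined_report.csv", "enduser@example.com")
```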