My customers are vehicle lease brokers - they contact other businesses and provide them with business vehicles on a lease basis.
The lease “Offers” are provided to them by various “Finance Providers”.
These “Offers” are provided in a CSV format (causing me much pain and frustration).
An “Offer” is effectively a combination of vehicle + lease terms + price. For example:
Offer A: 2024 Audi Q6 Sportback Black Edition 65kwh 4wd - 1 month + 36 months - £650/month (Massively simplified).
Each vehicle has its own “CAP ID” - it’s incredibly granular and it basically covers every possible combination of vehicle you can imagine. Every vehicle (at least in the UK) has a CAP ID, and you can tell EXACTLY what make, model, and variant a vehicle is by looking up the CAP ID.
Here is the challenge:
It is likely that multiple “Finance Providers” will provide “Offers” containing vehicles with the same “CAP ID” but differing pricing/leasing options etc.
A customer uploads a file containing an offer which already exists in the database (meaning it comes from the same Finance Provider and has the same CAP ID). The upload must overwrite the existing offer, and only when both the CAP ID and Finance Provider match.
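To make the overwrite rule concrete, here is a minimal sketch (in Python, purely illustrative - the actual app is Bubble, where this would be a “Search for Offers” constrained on both fields). The function and field names are my own assumptions, not anything from the plugin:

```python
# Offers keyed by the composite (cap_id, finance_provider).
# A new upload replaces a record only when BOTH keys match;
# the same CAP ID from a different provider is a separate offer.
offers = {}

def upsert_offer(cap_id, finance_provider, details):
    """Create the offer, or overwrite the existing one on a full key match."""
    offers[(cap_id, finance_provider)] = details

upsert_offer(12345, "Provider A", {"monthly_price": 650})
upsert_offer(12345, "Provider B", {"monthly_price": 640})  # same CAP ID, new record
upsert_offer(12345, "Provider A", {"monthly_price": 620})  # overwrites the first
```

After the three calls there are two offers, and Provider A’s price is the newer 620.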
I’m using the 1T CSV Uploader plugin and running it through a backend workflow as suggested, but I’m struggling to progress from there - whichever way I try it, I end up with either a duplicate record or everything being deleted.
Hey @jamie.robson.89 , could you outline the backend workflow process you’ve got through so far?
On the surface, you could send the CSV data to a backend loop, check each item against your records (do a search for a CAP ID + Finance Provider match), and then create/replace as needed.
Do you have any experience with coding? I find that when dealing with situations like these, sticking to vanilla Bubble overcomplicates things. Making a server-side action which processes the CSV and returns JSON (or the variables directly) is probably your best bet.
Hey @jamie.robson.89 - so the problem with your current workflow is that in step 2, scheduled workflows run ‘asynchronously’, meaning that they are triggered and then progress on their own, outside of the workflow. So you’re telling Bubble to create the things from the CSV, but simultaneously telling it to delete all the offers (the Search for offers:filtered won’t be able to reference changes made by the bulk upload).
For things like this, and as @jonah.deleseleuc has pointed out, I’d recommend using the 1T CSV Uploader to fire off the JSON from the CSVs to an endpoint in your app. Then, depending on your use case, either run a series of workflows based on this list (‘option 1’, if no duplication/error handling is required - again async), or run a loop (‘option 2’, if duplication or other checks need to be made on the data synchronously). This can all be done efficiently with vanilla Bubble + the 1T CSV Uploader (depending on the scale).
Hey @jamie.robson.89, with that number of columns - here’s a high-level example using scheduled workflows on a list for an update flow:
1 - the 1T CSV Uploader will give you a list of JSON objects from your data, something like this:
[
  {
    "id": 1,
    "first_name": "John",
    "last_name": "Doe",
    "email": "john.doe@example.com",
    "ip_address": "192.168.1.1"
  },
  {
    "id": 2,
    "first_name": "Jane",
    "last_name": "Smith",
    "email": "jane.smith@example.com",
    "ip_address": "192.168.1.2"
  }
]
2 - Create an endpoint in your app, and get it ready for initialization:
5 - on the original endpoint, fire the info that’s received to this backend workflow (scheduled on a list) - the request data is a list of JSON objects (from the 1T CSV Uploader)
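For anyone trying to picture what the uploader is doing conceptually, here is a rough Python sketch of the CSV-to-JSON step (my own illustration, not the plugin’s actual code): each CSV row becomes one JSON object, and the endpoint is scheduled on the resulting list.

```python
import csv
import io
import json

# Sample CSV matching the JSON example above.
csv_text = """id,first_name,last_name,email,ip_address
1,John,Doe,john.doe@example.com,192.168.1.1
2,Jane,Smith,jane.smith@example.com,192.168.1.2
"""

# DictReader turns each row into a dict keyed by the header row.
# Note: every value comes out as a string ("1", not 1) unless you cast it.
rows = list(csv.DictReader(io.StringIO(csv_text)))

# This list-of-objects payload is the shape the backend endpoint receives.
payload = json.dumps(rows)
```

One row = one JSON object, so scheduling a workflow “on a list” simply runs once per row.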
@DjackLowCode Happy new year - thanks so much for this, and sorry for the late reply.
I haven’t fully tested yet, but I’ve been able to at least start passing the information through - I think I can work with it from there.
One thing I’m stuck on though - I need to pass through the “customer” that’s doing the upload, so that I can constrain the search in the backend workflow.
For some context, it is possible (and likely) that one of my customers (a company), will be uploading a file that contains the same records as another (they use generic “ratebooks” which contain the same offers a lot of the time).
I tried using headers but it threw everything out, so figured I’d just ask before I mess it all up and have to start over.
You could also add some custom parameters to the upload so that you don’t need to add the “customer” ID to the 100s or 1000s of rows you’re uploading (not that it really matters at that scale).
I’ve done this before to match the customer, company etc. or any other data that needs to be consistent but not explicitly added with the CSV - it’s a great plugin.
What’s even weirder is that the formatting is the same for other fields in the CSV and they are being retained - the data types are correct and the privacy rules are set up correctly.
Clearly something is happening between the JSON being passed from the front end to the first API endpoint call in the backend, but I can’t for the life of me figure out what it is.
Yeah I added a debug entry after the “CSVUploader generates file” with the JSON texts and all the info is in there (checked in the backend logs as well)
Managed to get around this, but I’m wrestling with the next part which is deleting a list of offers.
Example:
In month 1, I upload a CSV that contains 100 offers. In month 3, I upload a CSV that contains 90 offers. The 10 offers that do not appear, should be deleted.
An offer is determined to be unique based on a few criteria:
CAP ID (Number)
Annual Mileage (Option set)
Initial Term (Option set)
Contract Length (Option set)
Finance Provider (Option set)
Parent Customer (Data type)
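A set-difference over those composite keys is one way to sketch the “delete what’s missing” logic. This is a Python illustration only (the field names are my assumptions; in Bubble it would be a filtered search followed by “Delete a list of things”):

```python
def offer_key(offer):
    """Composite uniqueness key mirroring the criteria listed above."""
    return (
        offer["cap_id"],
        offer["annual_mileage"],
        offer["initial_term"],
        offer["contract_length"],
        offer["finance_provider"],
        offer["parent_customer"],
    )

def stale_offers(existing, uploaded):
    """Return existing offers whose key does not appear in the new upload."""
    uploaded_keys = {offer_key(o) for o in uploaded}
    return [o for o in existing if offer_key(o) not in uploaded_keys]

# Month 1 had two offers; the month-3 file only contains the first one.
existing = [
    {"cap_id": 1, "annual_mileage": "8k", "initial_term": "1m",
     "contract_length": "36m", "finance_provider": "P1", "parent_customer": "C1"},
    {"cap_id": 2, "annual_mileage": "8k", "initial_term": "1m",
     "contract_length": "36m", "finance_provider": "P1", "parent_customer": "C1"},
]
uploaded = [existing[0]]
to_delete = stale_offers(existing, uploaded)  # the cap_id 2 offer
```

The key point is that the comparison happens only within one Parent Customer + Finance Provider scope, so one customer’s upload never deletes another customer’s copies of the same ratebook offers.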
Admittedly, these backend workflows and running on lists make my head spin a bit, so I’m grateful for any help on this!