Import files from AWS S3 bucket

Hi,

I am trying to import the CSV files that are in my S3 bucket into my Bubble database using the “upload as CSV” action in a backend workflow. However, I can’t find a way to download the files. I tried the AWS Dropzone plugin without success.

Is there a way to do that?

Thanks a lot for your help

I am unsure what you are trying to achieve here.

Do you want to import the files that are located in a remote AWS S3 Bucket into your Bubble app’s storage?

I don’t necessarily want to store the files in the Bubble DB; as they are CSVs, I would like to upload the data directly into my table. But if that’s not possible, maybe I could import the file into a table A and then upload the data as CSV from the CSV file in table A into table B.

I think I get it.
You have stored CSV files in AWS S3 and you want to store the data in those CSV files in your app database?


Exactly 🙂

Perhaps you ran into the same issue as on this post?

Yeah, I saw it, but I can’t use this technique as I deal with sensitive data; the links to the CSV files are not public.

This is a standard “export - reformat - import” task with the following steps:

  • download the CSV from AWS S3 to the local file system (e.g. an AWS CLI download)
  • load the CSV into any spreadsheet utility
  • import the local spreadsheet into the Bubble DB using Bubble’s normal upload utility

Manually do the above steps for a single S3 CSV file,

then script it to automate the loading of a batch of files.
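The “script it” step could look something like this; a rough sketch assuming boto3 is installed and AWS credentials are configured (the bucket name is a placeholder). The key-filtering helper is pure Python, so it can be exercised without AWS access:

```python
import os


def csv_keys(object_keys):
    """Filter an S3 key listing down to the CSV files."""
    return [k for k in object_keys if k.lower().endswith(".csv")]


def download_all_csvs(bucket: str, dest_dir: str = "."):
    """Download every CSV object in the bucket to dest_dir.

    Assumes AWS credentials are configured (e.g. via `aws configure`).
    """
    # boto3 imported here so the helper above works without it installed.
    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        keys = [obj["Key"] for obj in page.get("Contents", [])]
        for key in csv_keys(keys):
            dest = os.path.join(dest_dir, os.path.basename(key))
            s3.download_file(bucket, key, dest)
```

From there each downloaded file can go through the manual spreadsheet/import steps, or be fed to whatever upload automation you build.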

Thanks for your reply, but would it work in backend workflows?

No ideas here on a Bubble-tools-only solution.

I thought the mention of “my S3 bucket” in the OP meant you were already using non-Bubble elements (a white-label AWS bucket, not the Bubble-issued S3 service).

Do your requirements include:

  • migration of a legacy repository from S3 to the Bubble DB
  • support for ongoing transactions, where the end state should have the CSV data resident in the DB rather than in the old S3 bucket

If you are migrating a legacy repo, it may not make sense to limit your solution set to just Bubble tools.

If your main concern is ongoing transactions where you both:

  • must work in Bubble
  • must access CSVs resident in S3

there are just-Bubble techniques (I’m not an expert), but Bubble has some feature gaps that beg evaluation of non-Bubble tools with easy support for:

  • getting a stream reference
  • piping the stream of bytes to another process or thread
  • taking a readStream as input
  • converting CSV input to row-and-column output inserted into the DB

Doing the above is going to require some non-Bubble implementation (an API with callbacks, invoked from a Bubble hook to the outside).
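The stream-to-rows part of that pipeline can be sketched in plain Python; the function name is illustrative, and in the external service the stream would come from the S3 GetObject response body rather than a string buffer:

```python
import csv
from typing import IO, Iterator


def csv_stream_to_rows(stream: IO[str]) -> Iterator[dict]:
    """Take a readable text stream of CSV data and yield one dict per row,
    keyed by the header row -- the row-and-column output that would then be
    inserted into the DB."""
    for row in csv.DictReader(stream):
        yield dict(row)
```

An external API endpoint would wrap this: Bubble calls the endpoint, the service streams the private S3 object through the parser, and either inserts the rows directly or calls back into Bubble with them.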


This topic was automatically closed after 70 days. New replies are no longer allowed.