How to import a text file as database things?

Hi all, I have a .txt file with a list of paths like this:

Is there a way to import this as database things in Bubble? I'm also looking for a way to do this automatically whenever I update the text file in my S3 bucket.

Just an idea:

  1. If the text file is deposited in the S3 bucket under a consistent name, you can look for that name periodically using a backend scheduled workflow, or trigger the workflow from a front-end event. Process the file, then simply delete it as the last step. When the task runs again (automatically, or if manually triggered by accident) it will not find the file, so no harm done.

  2. To process the file, the key is understanding its structure, specifically the line delimiter. From the image it appears that the format of each line is identical, with the exception of the first line. First, it is important to know whether the new-line delimiter is simply an ASCII value of 10 (Line Feed), an ASCII 13 (Carriage Return), or both (CR, LF). Alternatively, you could load the file as if it were a CSV (I think I have seen this somewhere in Bubble), knowing that it will be read as one long string since it lacks commas or tab characters. That said, I cannot recall what a CSV input typically uses as an end-of-line delimiter, so some experimentation may be required.

This appears to be the output of a directory-list command, possibly piped to a file, which I would expect to use only an LF character. If the CSV input does not work: once you know the line delimiter, you can simply split the text into a list and toss out the first item, which is not an actual file reference. Then you can parse out the Date, File Size, and File Name in each list element using a recursive backend workflow, and store the fields in your database table.
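The splitting and parsing steps above can be sketched outside Bubble in a few lines of Python. This assumes a hypothetical listing layout of date, size, then file name, separated by runs of spaces; check your actual file's columns before relying on the field positions:

```python
import re

def parse_listing(raw_text):
    """Split a directory-listing dump into date / size / name records."""
    # Split on LF, stripping a trailing CR so CRLF-delimited files also work.
    lines = [ln.rstrip("\r") for ln in raw_text.split("\n") if ln.strip()]
    records = []
    # Skip the first line, which is a header rather than a file entry.
    for line in lines[1:]:
        # Collapse runs of spaces so a single space can serve as the delimiter.
        fields = re.sub(r" +", " ", line.strip()).split(" ")
        records.append({
            "date": fields[0],
            "size": fields[1],
            # A file name may itself contain spaces, so rejoin the remainder.
            "name": " ".join(fields[2:]),
        })
    return records
```

Each resulting record corresponds to one pass of the recursive backend workflow that writes a row to the database table.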

If others have a better solution, I'd love to learn.

Thanks John, I followed your steps and almost got it! I have data coming into the database, but I'm not sure how to do the split correctly:

I've tried typing \n and pressing return in the text box, but neither seems to do anything. What do I need to type there to get it to split?

\n (LF=10) may not be the line delimiter, and I do not know whether typing a carriage return (CR=13) in the Text separator box actually enters a value or is ignored.

I would suggest running a test to figure out the actual delimiter. You could set up a test that counts the delimiters using the regex [\r\n], which matches any instance of a CR or LF (and therefore CRLF or LFCR pairs). Then you can adjust the regex to narrow it down. Once you know the pattern, you can set up the split.

Note that the split on the lines is only the first step; you still have to split the fields. To do this, first replace all double spaces with single spaces to normalize the spacing between the fields (repeating the replace until no double spaces remain, since a single pass only halves longer runs). Then you can use the space character as the delimiter for the second split.
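One caveat worth checking: a single pass of replacing two spaces with one only halves longer runs (four spaces become two, five become three), so outside Bubble a regex that collapses any run of spaces is simpler. A quick illustration with a hypothetical line:

```python
import re

line = "2023-01-05     1024  report.txt"  # hypothetical listing line

# One pass of replacing two spaces with one only halves longer runs.
once = line.replace("  ", " ")
assert "  " in once  # a run of five spaces still leaves a double space

# Collapsing any run of spaces with a regex normalizes in a single pass.
collapsed = re.sub(r" +", " ", line)
print(collapsed.split(" "))  # ['2023-01-05', '1024', 'report.txt']
```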

Last thought (if you have not already done this): if it were me, I would put the data into a variable first rather than writing it straight into the DB, at least until you have the algorithm down pat. That way you can create a separate step for each change and verify the result with the debugger. Once you know how to construct the entire algorithm, you can merge the pieces and move it to the backend.


Hey! This is very interesting. Parsing the data makes sense to me; however, I don't know how to access the text file in S3 to parse the actual data.

Never mind, I was able to access it via a backend workflow using Base64 encoding.