I have been trying to connect to a Postgres instance on AWS RDS using the DB connector. I can successfully connect to and configure the database via pgAdmin 3, and I've populated the database via scripts using psycopg2.
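For reference, the direct connection that works for me looks roughly like this (a minimal sketch; the endpoint and credentials are placeholders):

```python
import psycopg2

# Placeholder RDS endpoint and credentials -- substitute your own values.
conn = psycopg2.connect(
    host="your-db-instance.xxxxxxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="db_name",
    user="username",
    password="password",
    connect_timeout=10,
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```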
Every time I attempt to test the connection to the database using the DB connector, I get a connection timeout error:
Connection issue Connection attempt failed: Current fiber timed out after 65000 ms
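For context, a "timed out" error at this stage usually means the port isn't reachable at all (a firewall/security-group issue) rather than bad credentials. A quick way to check reachability from any machine (hostname is a placeholder):

```python
import socket

# Placeholder endpoint; a timeout here points at network/security-group rules,
# not at the username or password.
host = "your-db-instance.xxxxxxxx.us-east-1.rds.amazonaws.com"
try:
    socket.create_connection((host, 5432), timeout=10).close()
    print("Port 5432 is reachable")
except OSError as exc:
    print(f"Cannot reach {host}:5432 -> {exc}")
```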
I attempted to use the following workaround with no success:
Does anyone know of any settings on the database side that may need to be configured in order for the DB connector to connect to an AWS RDS database?
I figured out that in the EC2 security groups you have to allow both the 0.0.0.0/0 and ::/0 CIDR blocks (IPv4 and IPv6 respectively, which basically means any IP of either version) in a new security group. You need to add those to both inbound and outbound rules if you intend to both read and write data. Additionally, make sure you select the database (PostgreSQL) as the rule type and it will set the port automatically for you, then follow the template that is in the SQL Connector plugin.
i.e. postgres://username:password@your_db_instance.aws_endpoint_path:port_number/db_name
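For the security group part, this is roughly the equivalent of the console steps above done with boto3 (a sketch only; the group ID is a placeholder and 5432 is assumed as the Postgres port):

```python
import boto3

ec2 = boto3.client("ec2")  # assumes AWS credentials are already configured locally

# One rule covering both IPv4 and IPv6 "anywhere" ranges on the Postgres port.
rule = [{
    "IpProtocol": "tcp",
    "FromPort": 5432,
    "ToPort": 5432,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "any IPv4 (dev only)"}],
    "Ipv6Ranges": [{"CidrIpv6": "::/0", "Description": "any IPv6 (dev only)"}],
}]

# Placeholder security group ID; apply the rule inbound and outbound.
ec2.authorize_security_group_ingress(GroupId="sg-0123456789abcdef0", IpPermissions=rule)
ec2.authorize_security_group_egress(GroupId="sg-0123456789abcdef0", IpPermissions=rule)
```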
WARNING: THIS IS NOT SECURE
As I stated above, those CIDR blocks give access to anyone, but it looks like unless you upgrade to the Enterprise plan you can't rely on Bubble reaching AWS from a dedicated IP or even a dedicated range of IPs. So the best I could do was set the security group's CIDR rules to the options above. If you are using this for production purposes, be sure to make your username and password very hard to crack (see the sketch below), but again, I simply don't advise this approach for anything more public than dev and testing. It's just low-hanging fruit for the dirty little hackers out there, and with a simple brute-forcing tool it wouldn't take long to figure out your username and password.
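If you do go this route, one easy way to generate a password that isn't realistically guessable is Python's standard library (nothing RDS-specific here):

```python
import secrets

# 48 random bytes -> a 64-character URL-safe string, fine to embed in the
# connection string template above.
print(secrets.token_urlsafe(48))
```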
For example, we are using this approach in our staging/development environments, but we are building our own API-driven hosted DB cluster using Node.js, MongoDB, Express.js, and a few other databases for heavy math, tying them together with GraphQL's slick API endpoints. We are additionally writing our API connections in a private plugin for our production environment and adding private headers, JWTs, etc. for more controlled connections to the DB via the API. Anyway, that's a lot of programming for the codeless Bubble platform, but funnily enough, even though we are all coders, we have found a very interesting niche for Bubble to fill as a central part of our ecosystem.
I'm also sure that AWS has a better way to lock it down through a different security mechanism, but if you just need to get it up and running and you're not too worried about security, then use the example above and start testing away.
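One AWS-side option I'm aware of (I haven't verified it with Bubble's connector, so treat this as a sketch) is to force encrypted connections by setting rds.force_ssl to 1 in the DB parameter group and connecting with an SSL-aware client, e.g. with psycopg2:

```python
import psycopg2

# sslmode="require" forces an encrypted connection; "verify-full" would also
# check the server certificate against the downloaded RDS CA bundle.
# Endpoint and credentials are placeholders, as before.
conn = psycopg2.connect(
    host="your-db-instance.xxxxxxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="db_name",
    user="username",
    password="password",
    sslmode="require",
)
```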