Running our own GPU servers for our app?

Hello fellow Bubble developers,

We are currently using a third-party API, Astria, to run our app. This API lets us fine-tune AI models and generate images from them as well.

At this point, though, we would like to move to our own GPUs, both to reduce costs and to give our users a better UX (we would have more freedom in how we operate with our own GPUs rather than relying on Astria's API and servers for storage).

We're not sure where to start with this, though, or how to implement it in Bubble.

Appreciate any insights or advice.

Hi, :hugs:

Bubble doesn't provide any native method related to GPUs, but the easiest option is to create a Flask server and call it via URL in the API Connector (which is what I do). This Flask server would be responsible for all the management of your AI.
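To make that pattern concrete, here's a minimal sketch: a tiny HTTP server with a single `/generate` endpoint that Bubble's API Connector can POST to. It uses Python's built-in `http.server` instead of Flask so it runs with no dependencies; `run_inference` and the `/generate` path are placeholders for your own model code, not anything prescribed by Bubble.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_inference(prompt):
    # Placeholder: call your actual model here (e.g., Stable Diffusion).
    return {"prompt": prompt, "image_url": "https://example.com/out.png"}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/generate":
            self.send_error(404)
            return
        # Read the JSON body Bubble's API Connector sends.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        result = run_inference(body.get("prompt", ""))
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def start_server(port=8000):
    # Run the server on a background thread so it doesn't block.
    server = HTTPServer(("127.0.0.1", port), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In the API Connector you would then configure a POST call to `http://your-host:8000/generate` with a JSON body, and map the returned fields.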

Honestly, there are even easier methods. Take a look at Cog (GitHub - replicate/cog: Containers for machine learning), created by the Replicate team, which lets you package AI models into a Docker container and use them in production right away. You can easily run the Docker container, open it to public connections, and call it via the API Connector in Bubble - it's something I've already tested, and it works as well.
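For a rough idea of what Cog asks of you: a project is essentially a `cog.yaml` build definition plus a `predict.py` with a Predictor class; `cog build` then produces a Docker image whose running container serves an HTTP `/predictions` endpoint you can call from the API Connector. The package list and versions below are illustrative placeholders, not a tested setup:

```yaml
# cog.yaml — illustrative build definition, not a tested configuration
build:
  gpu: true                  # request CUDA support in the image
  python_version: "3.11"
  python_packages:
    - "torch"                # your model's real dependencies go here
predict: "predict.py:Predictor"
```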

There's yet another method called Grog (GitHub - multimodalart/grog: Gradio UI for a Cog API), launched this year. It creates an interactive AI graphical interface from a Cog image, meaning you can import an AI from Replicate and Grog will build an interactive UI with Gradio. It's worth noting that Gradio, by default, exposes its interfaces as APIs, making it even easier.



Hey Arthur, thanks for the reply.

So, based on what you're saying, we can use Cog or Grog to host our custom fine-tuned AI models, but we would still need to build out an API to trigger the GPUs for model fine-tuning or image generation. And since Bubble doesn't offer any native GPU method, we would call our own API using the API Connector?

It may or may not be relevant, but the API Connector won't send requests to servers with self-signed certificates. I ran into this a while back and had to make a plugin with actions that ignore the certificate check. Not a big deal if you can just use plain HTTP, but it sounds like you have some other hurdles first.


Exactly that. I'm not sure which AIs you want to self-host, whether it's Stable Diffusion or something else, but you obviously need the computing power for the AI to function (i.e., a powerful GPU), and you'll want to tweak some things in the Gradio UI it generates (if you use Grog) to prevent abuse, such as capping the number of users that can run it simultaneously.

It's going to be a bit challenging (it took me a few days to figure everything out and work through the errors), but with persistence you can get there. I'm sure that no longer depending on third-party companies to run your AI is a big step for your app.

Regarding the API issue, Grog itself already creates an API; you can view it at the end of your generated interactive UI.


Appreciate the insights. This makes much more sense.

We do in fact plan on using Stable Diffusion. For fine-tuning, users train models on top of our own custom base model; then they generate images with their trained models.

I'm somewhat confused, though, about how we would implement the GPU part. We would likely go with an on-demand cloud GPU solution for this, but how would that play into everything?

I'll use Grog to store the models/images and use the API generated by Grog to call upon them; but then how do I get Grog to call upon the GPU to actually run these tasks?

Thanks again btw, this is really helpful!

As soon as Grog creates the interface on your computer (locally), it will start using your GPU's resources. So you need to expose it publicly and begin receiving API calls. In other words, each successful request will use your GPU's processing power.

At least, that's how I understand it. I use the cloud and Hugging Face; I've never hosted on my own computer. There are tutorials on Google about this, and probably services that let you attach GPUs to it. Unfortunately, I can't help you with that part, but if you need any additional help connecting Grog with Bubble, I'm available :wink:


So is there no way to do it on a server? It's not ideal if we're running it locally on our own GPUs.

I think you are making it too complicated. Plenty of folks allow you to fine-tune SD1.5 via API. Here’s an example: 🐣 Create and train your first model


No, I don't think you understand, especially with the scenario you linked. $5 per training is already double what we are paying at Astria. The whole point of this is to reduce our operating costs by running it ourselves, and to deliver a better end product by offering more customizability rather than being locked into what their API offers.

OK, this isn't really a Bubble thing; you will need to build a server. All the code I think you need is open source, and I believe it's already containerized, so it shouldn't be too hard to implement.

HOWEVER, this is only going to be cost-effective if you have lots of concurrent users with consistently heavy loads. I would definitely do a cost analysis before going and building anything.


I see what you're saying. We were planning on going the route of on-demand cloud GPU rental. Would this not be cost-effective even without masses of concurrent users? Or are you saying the cost of server upkeep would still be a major expense regardless of whether the GPUs are actively being used (for model training/image generation)?