Streaming API with Backend Workflows

Hi everyone,

I’m creating an AI chat agent in my Bubble app and I’d like to make use of the streaming text functionality for a better UX. The problem is I have a lot of sensitive dynamic data in the system prompt and in the data my Bubble app returns to AI function calls. I’d like to avoid exposing any of this to the client.

Because of this, I figured I’d have to do everything in backend workflows, but as I understand it there’s no way to get the text stream from the backend to the frontend. Is there a workaround or some other solution?

Here’s the post you want: How To Build AI Agents with Multi-Step Tool Calling in Bubble 🤖 🛠️

Yes, the downside is that you can’t have streaming that way. But, meh. Build a good loading UX.

If you keep your prompt (or even part of it) inside the body marked as private, wouldn’t it stop working?

The option that @NexradJosh shared, proposed by @georgecollier, works incredibly well too.

But it depends — if streaming is essential for you, it won’t be the best option…

I’ve had a very similar problem to yours before. The way I solved it was by sending the streaming to a third party, in this case n8n. I handled everything there, ran the AI agent with streaming, and it returned to Bubble through the API Connector…

It worked very well.
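To make the middleman idea above concrete, here’s a minimal Python sketch (not Bubble- or n8n-specific, and the model call is stubbed with a fake generator): the proxy holds the secret system prompt server-side, builds the full message list there, and streams only the assistant’s tokens back to the client. In a real setup `fake_model_stream` would be a streaming call to your LLM provider made from the proxy, never from the browser.

```python
# Sketch of the "middleman" pattern: the proxy keeps the secret system
# prompt and relays only the model's output tokens to the client.

SECRET_SYSTEM_PROMPT = "internal pricing rules..."  # never sent to the client


def fake_model_stream(messages):
    """Stand-in for a streaming LLM call; yields tokens one at a time."""
    reply = "Here is your answer."
    for token in reply.split(" "):
        yield token + " "


def proxy_stream(user_message):
    """What the middleman (n8n, or any small server) does:
    build the full prompt server-side, stream back only the reply."""
    messages = [
        {"role": "system", "content": SECRET_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
    for token in fake_model_stream(messages):
        # Only assistant tokens cross the wire; the system prompt stays here.
        yield token


chunks = list(proxy_stream("What does the plan cost?"))
print("".join(chunks).strip())
```

The key point is that the client only ever sees the token stream; the system prompt and any sensitive function-call data live entirely on the middleman, which is why this keeps them out of the browser while still giving you streaming.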

Thanks everyone. So it sounds like you can’t get the streaming API to work with backend workflows, and my options are:

  1. Build a good loading UX
  2. Use a third-party service to act as a middleman between Bubble and the gen AI provider.

Is that correct @georgecollier and @carlovsk.edits? If anyone has any other suggestions I’m happy to hear it.

The first one is absolutely correct, whether it’s streaming or not…

The second one depends on whether you really need text streaming given the conditions you set, the prompt and everything… In that case, you could use a third-party service.

If streaming isn’t essential, you could build everything in Bubble.