[New feature] Native support for API streaming

Hey @rutvij.bhise (and @fede.bubble) - you mentioned that streamed responses should now work in backend workflows. How do we actually set that up? My understanding is we need the “Display data” action in order to be able to show the real-time streamed responses. Thanks!

I’ve started swapping my ‘polling for status’ (OpenAI) workflows for streamed responses in my BE workflows. Despite the documentation stating that Bubble will not always ‘wait’ during a BE workflow, it does wait for a streamed response. I now have a simplified process of sending a message, streaming the response, and then using that in the next step. I don’t ‘stream’ the text anywhere into the UI (like a chat); I just use the outcome in a future action, or break it into JSON keys.

Has anyone managed to use Responses API to send the conversation history as part of the stream? This is to ensure it’s a conversation and not just a series of disconnected messages.

My API Connector is set to send the body as JSON:

{"model": "gpt-4.1-mini", "stream": true, "input": <input>}

In the API Connector, I can set the <input> value to be:

[
  {"role": "user", "content": [{"type": "text", "text": "hello"}]},
  {"role": "assistant", "content": [{"type": "text", "text": "hi there"}]}
]

And this all works just fine in the API Connector. But if I put the exact same text in the workflow action’s value and then stream a response to a group (to show the text), it fails. I understand one big chunk of JSON as a value isn’t ideal, but it can be created safely and submitted. I’m just surprised the same value goes from functional to failing when moving between the API Connector and a workflow action. My testing leads me to conclude that Bubble is adding something that breaks the JSON when it comes from a workflow action (escaping the double quotes, perhaps?).
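If Bubble is re-escaping or re-quoting the dynamic value, that would explain the failure. A minimal sketch of that failure mode in plain JavaScript (independent of Bubble, just to show why raw text interpolated into a JSON template breaks while `JSON.stringify` does not):

```javascript
// A user message containing a double quote, as it might arrive from an input.
const userText = 'He said "hi"';

// Naive interpolation: the inner quotes terminate the JSON string early,
// so the resulting text is no longer valid JSON.
const naiveBody = `{"role": "user", "content": "${userText}"}`;
let parsesNaively = true;
try {
  JSON.parse(naiveBody);
} catch (e) {
  parsesNaively = false; // unescaped quotes broke the JSON
}

// Safe construction: JSON.stringify escapes the quotes for us.
const safeBody = JSON.stringify({ role: "user", content: userText });
const parsed = JSON.parse(safeBody); // round-trips cleanly

console.log(parsesNaively, parsed.content);
```

The same logic applies inside Bubble: if the platform adds its own quoting on top of an already-quoted value, the parse on OpenAI’s side fails the same way `JSON.parse(naiveBody)` does here.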

Has anyone successfully gotten OpenAI’s Responses API streaming conversations with the history included?


Running into the same problem. I get an error every time. Have followed the tutorial exactly. Did you ever fix this?

Probably because custom events are awaited and don’t expose the result until they are complete. Who would have thought of what is probably one of the two main use cases for streaming…

I’m never able to get any of the text stream statuses to change: “Is Waiting”, “Is Streaming”, and “Is Done” are always no. I’ve occasionally gotten the text stream to actually work, but 99% of the time it stays static. No matter what I do, the entire response appears at once instead of streaming.

Here’s my API setup:

Here’s the initialization pushing to “text stream”:

Here’s how I’ve configured it on the front-end:

These 4 text elements (Waiting Icon, Text Stream, Session Message, and Status) used to be a single text element, but I split it up to test whether the issue had something to do with conditionals on an element interfering with the streaming state… which it does not, as far as I can tell.

  • “Waiting Icon” is hidden on page load and shown only when the message’s final content is empty, the message type is assistant, and “text stream so far” is empty.
  • “Text Stream” is visible on page load and hides itself when the session message’s content is not empty (i.e., the final value written to the database).
  • “Session Message” is the final state of the message’s content. It’s hidden on page load and conditionally shown when content is not empty.
  • “Status” is just a temporary element to display the state (Is Waiting, Is Streaming, and Is Done). I’ll delete it later.

Just a note for the ‘streamers’ to say I’ve moved nearly all my OpenAI calls to Cloudflare Workers. The result: faster responses and fewer issues.

I wish I hadn’t had to do this, as it adds a little more complexity, but some streaming responses took 5-10 seconds to start the stream, some timed out, and bigger requests (such as images) were occasionally just rejected. So far, my CF results show all these issues are gone. Plus, with CF I get more options to chain events, stream larger tool operations without timeouts, and cache data in a more controlled way. I can make 100k requests daily without charge (thanks, big tech :slight_smile:).
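For anyone curious what such a proxy can look like, here is a minimal sketch of a Cloudflare Worker that forwards a streamed Responses API call. This is an assumption about the setup, not the poster’s actual Worker: the endpoint and headers are OpenAI’s documented ones, and `OPENAI_API_KEY` is a Worker secret you would configure yourself.

```javascript
// Build the upstream OpenAI request separately so it can be inspected/tested.
// `body` is the client's JSON request body, passed through untouched.
function buildUpstreamRequest(body, apiKey) {
  return new Request("https://api.openai.com/v1/responses", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body,
  });
}

// Hypothetical Worker entry point: proxy POSTs and stream the SSE body back
// to the client without buffering.
const worker = {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }
    const upstream = await fetch(buildUpstreamRequest(request.body, env.OPENAI_API_KEY));
    return new Response(upstream.body, {
      status: upstream.status,
      headers: { "Content-Type": "text/event-stream" },
    });
  },
};
```

Because the Worker just pipes `upstream.body` through, the stream reaches the client as it arrives; add your own auth and routing before exposing anything like this.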


@rutvij.bhise Any updates on when streaming messages might be supported in custom events?


Hi there - we already support streaming in custom events. Server-side custom events still have to complete before continuing on to subsequent actions, just like any other server-side actions.

Not true. I had a streaming workflow working just fine. Moved it to a custom event and it never worked.

OK. Would you mind please creating a support ticket so the team can look into your specific case? Thank you!

And if you have already, please DM me so I can take a look. Thanks!

This feature came out 9 months ago, and multiple people have reported the bug that conditionals based on streaming text states don’t work, and it still has not been fixed. Honestly pathetic.


I’m using OpenAI’s Responses API with streaming and hitting a critical issue on the second API call.

Setup:

  • Using the Responses API’s store: true feature to maintain conversation history

  • Capturing the response id from the JSON (text field) to pass as previous_response_id in the next call

  • Also capturing the streaming choices:first item:delta:content (stream field)

  • Displaying the stream in a Group using “Display data in group”
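For reference, a sketch of what the two request bodies in this setup look like, built in plain JavaScript outside Bubble. This assumes the Responses API’s documented parameter name `previous_response_id` for chaining onto a stored conversation; the model name and the `resp_abc123` id are placeholders.

```javascript
// Build a Responses API body for a multi-turn streamed conversation.
// previousResponseId is null on the first call; afterwards, pass the
// `id` captured from the previous response.
function buildResponsesBody(userText, previousResponseId) {
  const body = {
    model: "gpt-4.1-mini",
    stream: true,
    store: true, // let OpenAI keep the conversation state server-side
    input: userText,
  };
  if (previousResponseId) {
    body.previous_response_id = previousResponseId;
  }
  return body;
}

// Call 1: no previous id yet.
const first = buildResponsesBody("hello", null);
// Call 2: chain onto the stored conversation using the id from call 1.
const second = buildResponsesBody("and a follow-up question", "resp_abc123");
console.log(JSON.stringify(second));
```

The only difference between the calls is the presence of `previous_response_id`, so if call 1 streams fine and call 2 does not, the body construction itself is unlikely to be the culprit.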

The Problem:

Call 1: Works perfectly - stream displays, ID captured successfully

Call 2: When I make the second call with previous_response_id, I get: “The data in this group was changed by an action. The value of the group’s data (above) may differ from the Data Source property below”

The stream doesn’t display, even though the API call completes successfully.

Call 3: Doesn’t respond at all

Question:

How do you reuse the same Group element for multiple streaming API calls?

Is there a specific way to reset or reinitialize the streaming connection between calls?

Has anyone successfully implemented multi-turn streaming conversations in Bubble?

Thanks for any guidance!