[New feature] Native support for API streaming

Hey! When I configure the API call in the Bubble API Connector with Data type: Stream, Bubble only detects one chunk and assigns it as a text stream, not a proper list of objects.

What I’ve tried:

  • Streaming via Streaming API Response in Xano
  • Returning a list of JSON objects with correct formatting
  • Manually entering the sample response in Bubble (still parsed as a text stream)
  • Splitting the response in Bubble using :split by (|)
  • Setting the repeating group's type to text, but then I can't access nested fields like question and response per item

What I want: To display each question/response pair inside a repeating group with proper access to each object’s fields (as you would with a regular API response).

Is there a recommended workaround to turn the streamed string into usable structured data within Bubble?
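For anyone experimenting outside Bubble first, here's a minimal sketch of the parsing step the `:split by (|)` approach is reaching for. It assumes the stream arrives as pipe-delimited JSON objects (as described above) with `question` and `response` keys; the delimiter and key names are taken from this post, everything else is an assumption.

```python
import json

def parse_streamed_pairs(raw):
    """Split a pipe-delimited stream of JSON objects into a list of dicts.

    Assumes each chunk between "|" separators is a complete JSON object
    with "question" and "response" keys; incomplete trailing chunks
    (still mid-stream) are skipped rather than raising an error.
    """
    pairs = []
    for chunk in raw.split("|"):
        chunk = chunk.strip()
        if not chunk:
            continue
        try:
            obj = json.loads(chunk)
        except json.JSONDecodeError:
            continue  # partial chunk: the stream hasn't finished this object yet
        if "question" in obj and "response" in obj:
            pairs.append(obj)
    return pairs

raw = '{"question": "Q1", "response": "A1"}|{"question": "Q2", "response": "A2"}'
print(parse_streamed_pairs(raw))
```

The same split-then-parse logic could live in a small plugin or a server-side endpoint that returns a proper list, which Bubble can then bind to a repeating group.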

I’ve been stuck on this for three weeks now. I tried this morning with WeWeb and it worked.

For the sake of your product, please do something.

Has anybody been able to actually configure a conditional action when a text stream’s status is “waiting”, “streaming”, or “done”?


  • I’m trying to set up a very basic conditional action for “when text stream is streaming” in order to scroll to the latest entry in a Repeating Group.
  • I’ve tried a plethora of configurations, but the action never triggers.

Lmk if anybody has resolved this…

@rutvij.bhise are you sure the custom event bug is fixed? I’m still not seeing streamed results when an OpenAI API call comes from within a custom event. The exact same sequence directly in a button click workflow streams nicely into the same elements.

Related question: I understand that the API key can be set to private (and it’s easy if you’re just working with one key).

We are calling an API directly from a button click (client-side workflow) and based on the workspace, we need to dynamically set the API Key.

Client-side / frontend users should also not be exposed to the workspace’s API key when calling the API.

Has anyone successfully set up secure streaming API calls with OpenAI (or any streaming service) while dynamically populating the keys and keeping the API key hidden from users? Not sure if this can be done in client-side workflows.

Given that we need to set the API key dynamically and keep it secure, I’m checking whether anyone has managed to set up backend workflows that stream in real time to the client side.
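One common pattern outside Bubble is a thin server-side proxy: the browser sends only a workspace id, and the server looks up the matching key and injects it into the upstream request, so the key never reaches the client. A minimal sketch of the key-injection step, where `WORKSPACE_KEYS` and the request shape are hypothetical (the real store would be a secrets manager, not an in-memory dict):

```python
import json

# Hypothetical server-side secret store keyed by workspace id.
# In practice this would come from environment variables or a secrets manager.
WORKSPACE_KEYS = {
    "acme": "sk-acme-secret",
    "globex": "sk-globex-secret",
}

def build_upstream_request(workspace_id, payload):
    """Build headers/body for the upstream streaming call.

    The Authorization header is assembled entirely server-side, so the
    client-side workflow only ever passes a workspace id, never a key.
    """
    key = WORKSPACE_KEYS.get(workspace_id)
    if key is None:
        raise KeyError("unknown workspace: " + workspace_id)
    return {
        "headers": {
            "Authorization": "Bearer " + key,  # never sent to the browser
            "Content-Type": "application/json",
        },
        "body": json.dumps({**payload, "stream": True}),
    }

req = build_upstream_request("acme", {"model": "gpt-4o"})
```

The proxy would then forward the upstream SSE chunks back to the client as they arrive; whether Bubble's Stream data type can consume such a proxied endpoint is exactly the open question in this thread.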

Would really appreciate any insights 🙏

I’ve learnt (the hard way) that if you have a group nested within a ‘text stream’ group, it’s highly likely any related workflow actions will fail from the child group (store in the database, convert MD to HTML, set state). If you remove the child group from the text stream parent, workflow actions related to the streamed run / full text (etc.) will then all magically work. I have submitted this to Bubble as a bug.

@rutvij.bhise - I’m still experiencing this bug, where streamed results are missing if called from a custom event

Me too, even if I add the stream as an output parameter in the custom event

ah ok - so it’s not just me. @rutvij.bhise - do you know if this is going to be fixed?

Has anybody been able to successfully set up a conditional action to scroll to an entry of an RG when the text stream is streaming?

Thank you, exactly what I needed.

hi all, would you mind sharing links to your app editors so I can share with the team to take a look? Here or in DMs is fine.

For custom events keep in mind that server-side custom events run async and need to be completed. Meaning you might be losing your API call return there

At this stage, has anyone been able to stream from a custom event successfully and display the streaming data in an RG just like it is done for a functional chat assistant system?

I have a setup that runs from a custom event, falling back between OpenAI and Claude when one of them errors. The text stream data returned from this custom event is displayed in a hidden element and shown in the last cell of the RG when streaming is “yes”.

I have tried so many times, but streaming doesn’t work: the workflow waits and then displays the full text all at once instead.

Has there been anyone able to implement streaming successfully in their chat assistant apps?

Was this fixed? I’m having trouble doing it.

Does anyone know if Bubble supports pre-streaming of reasoning-type text?

ChatGPT has a payload like this:

{
  "responses": [
    {
      "type": "reasoning",
      "message": "Processing your request. Please wait a few seconds while I gather detailed information."
    },
    {
      "type": "stream",
      "content": "Here's the first streamed chunk of your response…\n"
    },
    {
      "type": "stream",
      "content": "Followed by additional information…\n"
    },
    {
      "type": "stream",
      "content": "Finalizing your detailed response.\n"
    }
  ]
}

Basically, for our custom API response, we want to show behind-the-scenes processing steps before the actual stream starts, to improve the UX. It takes a while before the agent finishes processing the input and starts streaming (sometimes 8 to 10 seconds), so we want to show something like this (similar to how ChatGPT does it).

Has anyone tried something like this?
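If it helps anyone prototyping this, the payload above is easy to split into the two display phases client- or server-side. A small sketch that assumes exactly the shape shown (a `responses` list with `type` of either `reasoning` or `stream`); the function name is made up:

```python
def split_reasoning_and_stream(payload):
    """Separate "reasoning" status messages from the streamed content.

    Returns (status_text, streamed_text): the status text would be shown
    in a "thinking" placeholder while waiting, then replaced by the
    concatenated stream chunks once they start arriving.
    """
    reasoning = [r["message"] for r in payload["responses"] if r["type"] == "reasoning"]
    content = "".join(r["content"] for r in payload["responses"] if r["type"] == "stream")
    return " ".join(reasoning), content

payload = {
    "responses": [
        {"type": "reasoning", "message": "Processing your request."},
        {"type": "stream", "content": "First chunk. "},
        {"type": "stream", "content": "Second chunk."},
    ]
}
status, text = split_reasoning_and_stream(payload)
```

In Bubble terms, the status string would drive the placeholder element and the stream chunks would feed the text-stream element, but whether the API Connector exposes the two phases separately is still the open question.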

no, still does not work

Hi! I have a question: currently, is it possible to use Gemini and have streaming at the same time? Thanks!

So this was released not really working at all and minimally tested? Seems par for the course. I can’t get it to work with custom events and the waiting, streaming, done states don’t work whatsoever.

I think I am experiencing a similar issue. I have a group that is of datatype text stream, and uses a call to the chatgpt API as the data source. I have a custom plugin element that inherits the text stream from the parent group (so that it can optimistically process the JSON and return a valid result even if the JSON isn’t yet complete).

However, even though I can see that the text stream was successfully returned from the API call, nothing inside that group can inherit the text stream.

What makes this stranger though is that this was working at one point when I first built this feature, before suddenly not working. I was wondering if anyone was experiencing this now and knew of a way to fix it.
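For anyone building a similar "optimistic JSON" plugin element, here is one rough sketch of the parsing idea (not the poster's actual plugin): balance whatever braces, brackets, and quotes are still open in the partial stream, then try a normal parse. It ignores many edge cases (e.g. a value truncated mid-number), so treat it as an illustration only.

```python
import json

def parse_optimistic(partial):
    """Try to parse a possibly-incomplete JSON string.

    Walks the string tracking open braces/brackets and unterminated
    strings, appends the missing closers, and attempts a normal parse.
    Returns the parsed value, or None if the repaired string still
    isn't valid JSON.
    """
    stack = []       # closers we still owe, in open order
    in_string = False
    escape = False
    for ch in partial:
        if escape:
            escape = False
            continue
        if ch == "\\":
            escape = True
        elif ch == '"':
            in_string = not in_string
        elif not in_string:
            if ch in "{[":
                stack.append("}" if ch == "{" else "]")
            elif ch in "}]":
                stack.pop()
    candidate = partial + ('"' if in_string else "") + "".join(reversed(stack))
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None

parse_optimistic('{"question": "What is str')  # parses despite the cut-off value
```

A plugin element doing this on every chunk can render a usable result before the stream completes, which matches the behaviour described above.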

I’m not sure I understand your goal clearly enough to provide any insight on a potential solution.