[New feature] Native support for API streaming

Quick question though — will the new handling for “manually enter API response” support streamed responses coming from non-Gemini APIs, like a backend built with Xano?

Right now I’m testing chat streaming from Xano into Bubble, and even though the API sends incremental JSON or text chunks correctly, the Repeating Group stays empty. Would love to know if this update might help with parsing streamed arrays or objects progressively in Bubble.

Thanks again for the improvements — excited to test the rollout this week!

are you referring to the issue you mentioned here?

Just based on the screenshots I’d say the return data wasn’t parsed as JSON. Can you confirm you are returning parsable JSON from Xano? If so, please send me a sample response from Xano so I can pass it to the team for triaging.

Yes exactly — that’s the issue I’m referring to.

:white_check_mark: The Xano endpoint returns a valid JSON array like this:

```json
[
  { "id": 64, "question": "Hello", "render_answer": "Hi there!" },
  { "id": 65, "question": "What's up?", "render_answer": "All good!" }
]
```

When I test the API call in Bubble’s API Connector, the response is correctly parsed and I can see the array.

But when I try to stream that response and bind it to a Repeating Group — even via manual text split or setting the content type — the RG remains empty. It feels like Bubble doesn’t parse the streamed output into a usable list for RGs.
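For what it’s worth, here’s a plain-Python sketch (my own illustration, not Bubble internals, which I can’t see) of why a streamed JSON array is hard to bind progressively: no prefix of the stream is valid JSON until the closing bracket arrives, so a parser that runs per chunk gets nothing usable until the very end.

```python
import json

# Simulated chunks of a streamed JSON array: no chunk is valid JSON
# on its own, so a per-chunk parser sees nothing usable until the
# final "]" arrives.
chunks = [
    '[\n  { "id": 64, "question": "Hello", "render_answer"',
    ': "Hi there!" },\n  { "id": 65, "question": "What\'s up?"',
    ', "render_answer": "All good!" }\n]',
]

buffer = ""
parsed = None
for chunk in chunks:
    buffer += chunk
    try:
        parsed = json.loads(buffer)  # only succeeds once the array is complete
    except json.JSONDecodeError:
        continue  # incomplete so far -- keep accumulating

print(len(parsed))  # 2 -- but only after the whole stream has ended
```

A common workaround is to stream newline-delimited JSON objects instead, so each line parses as a complete object on its own.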

Happy to share the full response or setup if that helps the team debug!

Hey @fede.bubble,

Can you please have the team look into the bug that causes streamed text to appear empty whenever it is transferred anywhere? Whether it’s transferred (displayed) to another group or saved to the database, it’s always empty.

Please see a quick video demo of the bug here

Please note this bug exists for “text so far” or “full text”.

did you submit a support ticket? I think that would be helpful.

Tip: Add an animated cursor or blinking indicator that follows just ahead of your stream content. This helps show the user that something is happening. Add it as a conditional during the stream.

Hi there - just wanted to check in on this. Was this issue fixed?

fix is in engineering review, should be pushed to live for everyone by Friday at the latest

Thank you very much! You’re amazing!

Hi, I hit a roadblock yesterday trying to implement streaming with functions from an assistant and then stumbled across this thread. I’m wondering if there’s a timeline for being able to handle function calling and streaming from an assistant? Super keen to move away from the plugin approach I’m currently having to use.

If you’re referring to the Assistant API it is recommended to migrate to the Responses endpoint instead as Assistant will be removed in 2026. It’s clear that Responses is OpenAI’s flagship API now.

good to know, thanks Matt. I wasn’t necessarily referring to the Assistant endpoint, more the handling of functions when an assistant is called. I think this restriction would apply regardless of Assistants or Responses endpoint choice.

e.g.

  1. If a streaming API is pointed at the /threads/runs endpoint and initiated with a generic ‘hello’ message then you get streamed event responses (event: message.delta etc).

  2. If you want the function-calling events to handle internal database searches and send values back to OpenAI, you need to initialise with a prompt that will trigger OpenAI to call your function (as defined in the assistant setup on OpenAI).

The issue I’ve found is that you can’t have event responses for 1 & 2 mapped to response fields in the API. The initialisation process supersedes any previous initialisation, meaning you can have 1 OR 2, but not 1 AND 2, which really limits what you can achieve through a streaming API.
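To make the limitation concrete, here’s a hypothetical client-side dispatcher (event names modelled loosely on the description above, not an exact OpenAI schema) that routes one stream’s events to different handlers, which is roughly what a single Bubble call would need to do to support both cases at once.

```python
def handle_event(event_type, data, transcript, tool_calls):
    """Route streamed events by type so one stream can carry both
    message deltas (case 1) and function-call requests (case 2)."""
    if event_type == "thread.message.delta":
        # Incremental assistant text -- append to the visible transcript.
        transcript.append(data.get("text", ""))
    elif event_type == "thread.run.requires_action":
        # The model is asking us to run a function and submit the result.
        tool_calls.append(data)
    # Other event types (run steps, done, ...) can be ignored or logged.

transcript, tool_calls = [], []
events = [
    ("thread.message.delta", {"text": "Hel"}),
    ("thread.message.delta", {"text": "lo"}),
    ("thread.run.requires_action", {"name": "search_db", "arguments": {"q": "orders"}}),
]
for etype, payload in events:
    handle_event(etype, payload, transcript, tool_calls)

print("".join(transcript))    # Hello
print(tool_calls[0]["name"])  # search_db
```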

I think it was mentioned further up this thread, that manual field definition was coming at some point soon.

Of course, if I’m wrong I’d be more than happy to be told so!

Has anyone had this issue?

Setup

  • API Connector call to OpenAI with “Stream” turned on.
  • Displaying the chunks in a Repeating Group (works great).

Problem
The moment I add a workflow like:

When Text Stream is streaming → Scroll to entry in RG (Messages, last item)

the stream stops behaving as a stream—Bubble waits and delivers the full response only after generation is complete.

Removing that single conditional makes streaming work again.


Thank you for Streaming API updates.
I am currently developing a real-time streaming service by integrating an external API (custom-built with FastAPI) using the Bubble.io platform.

In the API Connector, I have configured the API call to use the “Stream” data type to receive Server-Sent Events (SSE) responses. On the server-side, the API adheres to the standard SSE protocol, correctly sending UTF-8 encoded JSON data with the Content-Type: text/event-stream; charset=utf-8 header.
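As a point of reference, the SSE wire format described above frames each event with `data:` lines and a terminating blank line. Here’s a minimal sketch of that framing (my own illustration, not the actual FastAPI code):

```python
def format_sse(data, event=None):
    """Frame a payload as a Server-Sent Event: each line of the payload
    becomes its own 'data:' line, and a blank line terminates the event."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    for line in data.splitlines() or [""]:
        lines.append(f"data: {line}")
    return "\n".join(lines) + "\n\n"

# A multi-line JSON payload is split across several data: lines;
# the client is expected to rejoin them with newlines.
print(format_sse('{"msg":\n"안녕하세요"}', event="message"))
```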

The issue I’m encountering is that when calling the API through my Bubble app and receiving the streaming response, Korean characters occasionally appear broken or corrupted. This issue occurs intermittently, seemingly at random, and is not specific to any particular condition or environment.

Importantly, I have not encountered this issue under the following testing scenarios for the exact same API endpoint:

  1. Direct testing via the “Try it out” feature on the FastAPI server’s /docs (Swagger UI) page.
  2. Receiving the SSE stream using a separate test client written in Python with the requests library.

In both of these test environments, all Korean text data was received and decoded correctly.

Based on these observations, I suspect that there might be an intermittent issue within how the Bubble.io API Connector handles or parses SSE responses when the “Stream” data type is used, potentially related to character encoding or data chunk assembly.
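One plausible (unconfirmed) mechanism for intermittent Hangul corruption: if a network chunk boundary falls inside a multi-byte UTF-8 sequence and each chunk is decoded independently, the split character is destroyed, whereas an incremental decoder buffers the partial bytes until the sequence completes. A plain-Python illustration:

```python
import codecs

text = "안녕하세요"            # 15 UTF-8 bytes (3 per syllable)
raw = text.encode("utf-8")
# Split mid-character: byte 7 falls inside the third syllable's sequence.
chunks = [raw[:7], raw[7:]]

# Naive per-chunk decoding corrupts the split character.
naive = "".join(c.decode("utf-8", errors="replace") for c in chunks)

# An incremental decoder carries the partial sequence across chunks.
decoder = codecs.getincrementaldecoder("utf-8")()
correct = "".join(decoder.decode(c) for c in chunks)

print(naive)    # contains U+FFFD replacement characters
print(correct)  # 안녕하세요
```

Whether Bubble’s “Stream” handling decodes per chunk like the naive version is an assumption on my part, but the symptom matches.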

I would like to inquire about the following:

  1. Are there any known limitations or bugs with the Bubble.io API Connector’s “Stream” data type when processing UTF-8 encoded SSE streams, especially those containing multi-line JSON data within the data: field?
  2. Are there any alternative configurations or best practices you would recommend within the API Connector setup or my Bubble app to mitigate this intermittent Korean character encoding issue? (e.g., using the “Text” data type instead of “Stream” and manually parsing the SSE on the client-side).
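If it helps, manually parsing the SSE client-side (the fallback mentioned in point 2) amounts to splitting the stream on blank lines and rejoining multiple `data:` lines within one event with newlines. A minimal sketch:

```python
def parse_sse(stream_text):
    """Minimal SSE parser: events are separated by blank lines, and
    multiple 'data:' lines within one event are rejoined with newlines."""
    events = []
    for block in stream_text.split("\n\n"):
        event_type, data_lines = "message", []
        for line in block.split("\n"):
            if line.startswith("event:"):
                event_type = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].lstrip(" "))
        if data_lines:
            events.append((event_type, "\n".join(data_lines)))
    return events

raw = 'event: message\ndata: {"msg":\ndata: "hello"}\n\n'
print(parse_sse(raw))  # [('message', '{"msg":\n"hello"}')]
```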

Could you share any best practices for reliably integrating external SSE streams with the Bubble platform?

Any information or advice you can provide to help resolve this issue would be greatly appreciated.

Thank you for your time and assistance.

Hey everyone - in case it’s helpful, I just posted a pretty comprehensive tutorial on setting up streaming within a ChatGPT-like interface.

Anyone else watch Matt’s videos even when they think they know how something works? :rofl:

Great stuff

Another bug: the streaming works for me if you call the API from a button and display the stream in a text field.

It DOES NOT work (no output) if the API call is triggered automatically from the Page Load event. In that case the destination (text field) does not receive any stream.

@fede.bubble Checking in to see if the custom event fix has been pushed

Interesting — it actually works for me.

I’m seeing the stream correctly when triggered on Page Load and displayed in the text field.

I use the Chat Completions API from OpenAI. I have to move (copy-paste) the API call and the Display data action out of the Page Load event to make it work.
Could it be that it works for you because you use the Responses API? Strange.