Interesting…
Could you confirm this @rutvij.bhise ?
There’s a whole bunch of bugs with this streaming functionality. Where can I report these issues? Do I just post them here?
You can go to the Bubble Support Center and use the chatbot to submit bug reports. Thanks!
Hey @Theodoros, Xano streaming doesn’t work, and neither does any other streaming API that isn’t OpenAI-standard: no streaming data ever reaches the group.
I’ve filed a bug report, but I’m also dropping the issues here:
I recommend OpenRouter (https://openrouter.ai) to use any model with the OpenAI schema and hot swap models easily!
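For anyone who wants to see what that looks like in practice, here’s a minimal sketch (plain Python with the openai SDK, outside Bubble) of pointing an OpenAI-spec client at OpenRouter. The model ID and the `OPENROUTER_API_KEY` variable name are just illustrative:

```python
import os
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible endpoint, so the standard client works;
# only the base URL and API key change.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var name
)

stream = client.chat.completions.create(
    model="google/gemini-2.0-flash-001",  # illustrative OpenRouter model ID
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,  # OpenAI-style SSE streaming
)

for chunk in stream:
    # Each chunk carries an incremental text delta in the OpenAI format.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

Because both the request and the streamed chunks follow the OpenAI spec, the same code works for any model OpenRouter hosts.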
Hey @georgecollier. Appreciate the reply. Unfortunately I’m mostly trying to use Google Gemini, and even though they have an OpenAI-compatible API, it sucks pretty bad because it lacks a bunch of features that are available in their normal API.
Yeah, you can use Gemini with OpenRouter (as well as OpenAI/Anthropic/DeepSeek/all other open source models etc)!
I know it doesn’t solve the root cause but at least gives a good workaround.
We use it as a best practice in our agency because using different API calls for each provider in Bubble is such a pain. They charge 5% on top of the model’s raw cost, which is pretty reasonable.
Strongly recommend you take a look so you can use Gemini with the OpenAI spec, tool calling, etc., and easily swap to any other model if you prefer, just by changing the model ID and nothing else.
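To make the model-swap point concrete, a tiny sketch: only the model string changes, and the IDs below are just examples of OpenRouter’s provider/model naming.

```python
# Same OpenAI-spec request body; only the model string differs per provider.
MODEL_GEMINI = "google/gemini-2.0-flash-001"   # illustrative OpenRouter model IDs
MODEL_CLAUDE = "anthropic/claude-3.5-sonnet"   # in "provider/model" form

def chat_request(model: str) -> dict:
    # The body follows the OpenAI chat-completions spec regardless of the provider.
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Summarise this thread"}],
        "stream": True,
    }
```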
Sounds good, I’ll give it a try. My guess is that it’s still limited by Gemini’s official OpenAI API spec, but I’ll test and see. Thanks!
@fede.bubble , are you planning to add a “manually enter API response” feature? Or, will you roll out streaming-based enhancements to address the current limitations? If so, do you have an estimated timeline for these updates? Thank you.
May I reiterate that this feature comes across as alpha (at best). It’s not a user’s role to rip features apart and submit bugs to support. Live features should be more robust than this, especially as we have so little control over the workarounds (use third parties / build our own third-party app).
Can we please get some indication of whether the developers are looking into improvements?
I was so excited to get this working, but am now stuck waiting for support’s reply on how to proceed…haha.
This is a massive help!
Please let me know when they get back to you!
This doesn’t seem to work if you’re returning structured JSON from OpenAI, only basic text completions. Has anyone else had this issue?
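For what it’s worth, here’s a rough sketch of why that might happen (plain Python, not the Bubble connector): when you stream a structured-output request, the deltas arrive as fragments of a JSON string rather than readable prose, so anything that expects plain text has nothing sensible to show until the stream completes. The model name and schema are illustrative.

```python
import json
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model
    messages=[{"role": "user", "content": "Give me a title and a summary."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "article",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "summary": {"type": "string"},
                },
                "required": ["title", "summary"],
                "additionalProperties": False,
            },
        },
    },
    stream=True,
)

buffer = ""
for chunk in stream:
    # Deltas look like pieces of '{"title": "...' rather than display-ready text.
    if chunk.choices and chunk.choices[0].delta.content:
        buffer += chunk.choices[0].delta.content

result = json.loads(buffer)  # only valid JSON once the stream has finished
print(result["title"])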
Just catching up on this and everyone’s insights - what a time to take 2 weeks paternity leave!
I can see there is some confusion surrounding streaming into a repeating group and saving the full text response to the database.
Here’s my approach:
Anything new about this issue?
I’m encountering the same issue — the response is being parsed as a generic event, so I’m unable to display any data.
@fede.bubble Could you check with the team to see how we’re supposed to handle this?
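In case it helps with debugging, here’s a sketch of the difference in chunk shapes between the OpenAI spec and Gemini’s native streamGenerateContent output, which is probably why the stream gets treated as a generic event. The normalization helper below is just an illustration, not how Bubble’s connector actually parses the stream.

```python
import json

# Example SSE payloads in each API's documented streaming shape.
openai_chunk = '{"choices": [{"delta": {"content": "Hel"}, "index": 0}]}'
gemini_chunk = '{"candidates": [{"content": {"parts": [{"text": "Hel"}]}}]}'

def extract_text(raw: str) -> str | None:
    """Pull the incremental text out of either chunk shape."""
    data = json.loads(raw)
    if "choices" in data:  # OpenAI-style delta
        return data["choices"][0]["delta"].get("content")
    if "candidates" in data:  # Gemini streamGenerateContent style
        parts = data["candidates"][0]["content"]["parts"]
        return "".join(p.get("text", "") for p in parts)
    return None  # unknown shape -> ends up as a "generic" event

print(extract_text(openai_chunk), extract_text(gemini_chunk))
```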
@fede.bubble any update on letting us modify the stream payload?
From the team:
We’re enabling “manually enter API response” for streamed responses, and updating the response handling to better support the native Gemini API. Both changes should go out this week. More tweaks in response to the feedback in this post, and more informative errors when there are issues setting things up, will follow soon after.