Hey @paul29, thank you for constantly adding new features. I couldn’t find the documentation on how to connect the chatbot to a repeating group. I appreciate your help.
It is coming this week. I am a little behind on it. I was aiming to release it by the end of this week, but there is one final issue I need to sort out.
Hi Phill,
Thanks for alerting me to the fact that it is now available. I will get that in today or tomorrow at the latest and then update the plugin.
This has been updated. You should see the models there now in the latest version (5.5.1).
Hey everyone
Version 5.5 has been released, which includes all of the chatbot features. Here is the first video in a three-part series. This first part covers just the features. The second part (coming tomorrow) will go into the Bubble editor to give a general idea of how things are built out, and the third part will explain how to build it out from scratch (coming end of this week).
Hi guys,
I just pushed a new version (5.5.2) that has xAI’s LLM called Grok.
Also, you can now add a Flowise override config JSON object. That can be done in the Call LLM action shown here:
and would have this format:
{
  "openAIApiKey": {
    "chatOpenAI_0": "sk-my-openai-1st-key",
    "openAIEmbeddings_0": "sk-my-openai-2nd-key"
  }
}
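For anyone curious what that field maps to under the hood: Flowise reads per-request overrides from an `overrideConfig` key in the prediction request body. Here is a minimal sketch (the question text and node ids are placeholders; the node ids come from your own chatflow):

```python
import json

# Override config in the same format as the example above; the node ids
# ("chatOpenAI_0", "openAIEmbeddings_0") are whatever your chatflow uses.
override_config = {
    "openAIApiKey": {
        "chatOpenAI_0": "sk-my-openai-1st-key",
        "openAIEmbeddings_0": "sk-my-openai-2nd-key",
    }
}

def build_prediction_payload(question: str, override: dict) -> dict:
    """Build the JSON body Flowise expects for a prediction request.

    Flowise picks the per-request overrides out of the "overrideConfig"
    field; the plugin's override config field is passed through the same way.
    """
    return {"question": question, "overrideConfig": override}

payload = build_prediction_payload("What is Bubble?", override_config)
print(json.dumps(payload, indent=2))
```

Sending it is then just a POST of this payload to your Flowise instance’s prediction endpoint for the chatflow.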
Hi @sazadi and @luis.angel.cip please see my most recent post. This feature is now available.
Hey @paul29, thanks man, really appreciate your support. You are the best.
Hi Everyone and happy Friday,
I have just released a new version which includes GPT Realtime. If you’re not sure what GPT Realtime is, have a look at this post and scroll down to the “How it works” section:
Introducing the Realtime API | OpenAI
How to use it:
Step 1: Connect to GPT Realtime:
Step 2: Start recording user audio
Step 3: Stop recording user audio:
Step 4: Run the same action in step 1 but pass in “DISCONNECT” as per the documentation:
IMPORTANT: Your API key is not protected with GPT Realtime (this lack of protection is specific to GPT Realtime; all other functionality still protects your key). Because GPT Realtime keeps an open connection with the client device, protecting the key would require me to hold that open connection on my server, which would be very taxing with multiple people using it at once. If you still want to protect your API key, you will need to spin up your own server and put the URL to that server into this field here:
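To make the “spin up your own server” option a bit more concrete: the idea is a relay that holds the upstream connection and adds the secret key server-side, so the client never sees it. The real Realtime endpoint is a WebSocket (wss://api.openai.com/v1/realtime), so a production relay would speak WebSocket; this Python sketch only shows the raw byte-forwarding pattern, and it also illustrates why hosting this is taxing: one long-lived upstream connection per connected client.

```python
import asyncio

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes one way until EOF, then close the write side."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer,
                        upstream_host: str, upstream_port: int) -> None:
    """Hold one long-lived upstream connection per client and relay both ways.

    In a real relay the upstream would be the Realtime WebSocket endpoint,
    and this is where you would attach the Authorization header with your
    secret key, on the server, so the browser never sees it.
    """
    up_reader, up_writer = await asyncio.open_connection(upstream_host, upstream_port)
    await asyncio.gather(
        pipe(client_reader, up_writer),   # client -> upstream
        pipe(up_reader, client_writer),   # upstream -> client
    )

async def serve(listen_port: int, upstream_host: str, upstream_port: int):
    """Start the relay listening locally (placeholder host/ports)."""
    return await asyncio.start_server(
        lambda r, w: handle_client(r, w, upstream_host, upstream_port),
        "127.0.0.1", listen_port)
```

The cost Paul describes falls out of `handle_client`: each client ties up a connection and two copy loops on the relay for as long as the session lasts.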
Langflow integration is coming up next.
I was going to release n8n along with this release, but I found out that n8n doesn’t support streaming, which is disappointing.
As always, let me know if you have any questions.
Hey @paul29, I’m not sure if I’m doing something wrong or it’s a bug, but please look at the screenshots I sent. I separate the user and assistant messages with the field role: if the role is user (the message that the user sent), it should appear on the right, and the assistant messages should appear on the left. The problem is that sometimes when I receive a message from the server, it is registered to the user row in the database. I followed your instructions and pass through both user and assistant records, but sometimes the response is saved as a user record. The first image is the correct form and the second image is incorrect.
Also, sometimes I receive a weird message instead of a response from my agent. I believe it came from your server, because I can’t find it in my chat history.

*I used Claude in this case.
Hi @sazadi
Are you able to send me screenshots of the associated actions and elements so I can help debug?
Thanks,
Paul
Hey @paul29, I stopped using it for now, bro (I had to finish the preview of my app). I might delete some of the workflows, but I can reproduce it next week if you want.
Thanks, man.
If you have some time to reproduce that would be great. I don’t like having bugs in my plugin.
Thanks,
Paul
Hey @paul29,
Does this plugin still use an intermediary server?
Hi @mikhaeel
Yes, it does. Bubble still does not support streaming, so this is the only way to get it to work.
Please let me know if you have any other questions.
Thanks for the reply Paul! Two quick questions:
- If requests run through your server, is there any way for me to be certain that my API key (and any information within my requests) is safe and secure?
- How do I know that your server will always be stable and functional? What if you get too many concurrent requests?
- On security: I have been offering this service for about a year now, so you are technically putting your trust in me to keep it secure. The only reassurance I can offer here are my friends in the Bubble community (@AliFarahat, @jagdish_bajaj and @NoCodeAdvantage), whom I know quite well and who could vouch for me. Additionally, LLM tokens are cheap these days; there really is very little value in trying to steal other people’s tokens.
- On stability, two reasons:
  - I use this service all the time for my own apps, so even if I didn’t have any income, letting the service go down would break my own apps.
  - This is income for me. I don’t want to lose the revenue.
Hope that helps.
Thanks for answering, Paul. Is there a particular threshold of requests that might cause stability issues? I just want to ensure that I can count on the service to work without hiccups if I get a sizable amount of traffic.