Server is back up. Infrastructure upgrades are coming today to ensure this doesn’t happen again. Very sorry to all the plugin users who were affected.
Hey there! Thank you for fixing the problem so quickly - I really appreciate it!
I was just wondering if there is a way we can use our own servers dynamically?
Thank you so much!
@Shan - support for custom server URLs is coming this week.
Love this! One question though, how does the plug-in handle maximum token lengths? Do we need to manually limit the body length?
If I have understood it correctly, does the ‘Display Message’ come from the ‘Message History’?
@jaos.pcl - Glad you like it!
Currently the plugin handles maximum length by sending the last 8 messages by default. You can also set a smaller number via “Message History” in the “Send Message” action, if, say, you want to limit token usage and be sure not to go over. But soon there will be more flexible and robust options, so that you don’t have to worry about going over and can pack more history into the allowed context length. This last part will probably be done using a couple of tools/ideas from LangChain to summarize long conversations (e.g., ConversationSummaryBufferMemory).
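For the curious, the LangChain approach looks roughly like this (a Python sketch of the library’s memory API, not the plugin’s actual backend code):

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

llm = ChatOpenAI(temperature=0)

# Keeps recent messages verbatim and summarizes older ones once the
# buffer exceeds max_token_limit, so long chats still fit the context.
memory = ConversationSummaryBufferMemory(
    llm=llm, max_token_limit=1000, return_messages=True
)

memory.save_context({"input": "Hi, I'm building a Bubble app"},
                    {"output": "Great! How can I help?"})
print(memory.load_memory_variables({}))
```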
Thank you for your response!
Can the history actually be longer than 8 messages, with the plugin ignoring every message beyond the last 8?
It would be great if we could have the option to separate the source of the display messages from the history, so the user can view the whole conversation. Since the history messages are limited to 8, are the display messages limited too? Am I right?
EDIT: Forgot to say, would love the feature of summarizing long conversations!
The history can be as long as you like. But for each request, currently only the 8 most recent messages are sent. You could have a 500-turn conversation though.
I see.
It appears the system message doesn’t get sent if there are more than 8 messages. Do we need to send a new system message each time once there are more than 8?
Ah, that’s a good catch. It sends the last 8 messages by default, so if the first message is a system message, and you’ve got 9+ msgs, it wouldn’t be sent.
A short-term fix would be as you suggested, and set the SM each time. I can fix it in the backend to be more robust, so that it automatically sends the most recent system message. Working on stability upgrades atm; will implement this soon.
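For those interested, the backend fix will amount to something like this (an illustrative Python sketch; the function name and shape are mine, not the plugin’s actual code):

```python
def trim_history(messages: list[dict], max_recent: int = 8) -> list[dict]:
    """Send the last `max_recent` messages, but re-attach the most recent
    system message if trimming would otherwise drop it."""
    recent = messages[-max_recent:]
    if not any(m["role"] == "system" for m in recent):
        system_msgs = [m for m in messages if m["role"] == "system"]
        if system_msgs:
            recent = [system_msgs[-1]] + recent
    return recent
```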
Hi all,
Just published some important stability updates with 5.7.1:
- Connection handling is now more robust, which helps if your app is getting a lot of traffic.
- New Workflow Action - “Ensure Connected”. You can run this before running “Send Message”, to ensure the connection is open. Not strictly necessary, but adds stability for long user sessions.
If you find any bugs or issues, please let me know!
Hi all - two more notes about recent updates:
- First, apologies to all for the service interruptions. Traffic continues to grow faster than I’ve been able to keep up with, which is of course fantastic, but has brought some growing pains.
- Second, please update to 5.7.1 if you’re using this plugin in production, and please run the “Ensure Connected” action before the “Send Message” action. This will result in a much more stable user experience.
I’ve just upgraded the servers again to handle the traffic spikes, so I’m hoping we’ll see things become a lot more stable over the next day or so.
Thanks!
Thank you!
Hi all,
For those following the stability/scaling issue, there have been some significant improvements made over the last 2 days, and more coming over the weekend. The plugin is becoming more performant and reliable every day.
Thanks to everyone for your patience this week, and your help in finding and fixing the bottlenecks! 
Hey all,
For anyone who may have experienced the service interruption earlier today, the issue has been fixed. Sorry! I’m hoping we’ve just about rooted out all the possible ways the system can break. If you’re curious, we’ve hit all of the following failure modes in the past 2 weeks:
- Redis/cache overload
- Network I/O limits
- CPU throttling
- RAM limitations
- File descriptor limits
All have now been addressed, and we’ll keep patching/improving as we find new scaling pains.
A Short-Term Roadmap
Here are the improvements coming over the next few days:
- Run without 3rd-party server - connect directly to OpenAI, so you don’t have to worry about data privacy issues. This has been built already, and I will release it tomorrow (May 8) after more thorough testing.
- Health Check - easily check if the service is connected, so that you can use a fallback option, like …
- Non-streaming Response Fallback - in case streaming is unavailable, fall back to a standard wait-for-response API call. The goal is for the plugin to never have this event trigger, but also to never have a request fail to complete because of a connectivity issue (see the sketch after this list).
- Custom OpenAI Endpoints - support for OpenAI on Azure and other APIs exposing the ChatGPT models. This will be experimental.
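For anyone curious what the fallback looks like, the behavior is roughly this (an illustrative Python sketch using the pre-1.0 openai library, not the plugin’s actual code):

```python
import openai

def send_message(messages, model="gpt-3.5-turbo"):
    try:
        # Preferred path: stream tokens back as they are generated.
        stream = openai.ChatCompletion.create(
            model=model, messages=messages, stream=True
        )
        return "".join(
            chunk["choices"][0]["delta"].get("content", "")
            for chunk in stream
        )
    except openai.error.OpenAIError:
        # Fallback: a standard blocking call, so the request still completes.
        resp = openai.ChatCompletion.create(model=model, messages=messages)
        return resp["choices"][0]["message"]["content"]
```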
Other features in the works:
- Code Syntax Highlighting - auto-detect and format code snippets, as in the official interface.
- System Message Improvements - checkbox to always send the most recent system message, so you don’t always need to set it.
- Better Message History Handling - send more complete message history context, with fewer tokens, using conversation summarization from LangChain.
Let me know what else you’d like to see in this list!
Hey,
Thanks for the plugin - it’s great!
However, I am sporadically facing bugs when I send it the message history and add a prompt on top of that. It seems to give a garbage response in return.
I have also recorded a loom video to showcase the fact that I am sending the message history correctly and it is still returning a garbage response.
I had shared this with my team and am facing some heat because of these errors. I tried pretty hard to debug this - can you please help out here?
Hi Hrishikesh,
Thanks a lot for letting me know about this. I’ve just sent you a follow-up email. I’ll take a look this morning and get back to you shortly once it’s resolved. Stay tuned!
Thanks Korey!
If anyone else is currently seeing this issue, check whether you are running “Set Message History” in the same workflow as the one calling the “Send Message” action. Currently, this sometimes causes the message history not to be sent, because generation starts asynchronously, before the “Set Message History” action has had time to run. This appears to happen especially in “Debug Mode”, as Bubble will pause the workflow steps (i.e. the Set Message History action), but not the plugin action.
I’m looking for a solution to this, but in the meantime, you can do the following:
Create a custom workflow that calls “Clear Message History”, and call that custom workflow from your larger workflow. This will force it to run when you want it to.
You can also try setting the history in a different workflow from the one running “Send Message” - for example, when a user reloads a previous conversation, or on message generation completion, etc.
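If it helps to picture the race, here’s a tiny Python illustration (purely illustrative; Bubble’s workflow engine is of course not Python):

```python
import asyncio

history: list = []

async def set_message_history(msgs):
    await asyncio.sleep(0.1)  # simulates Bubble committing the state change
    history[:] = msgs

async def send_message():
    # Reads whatever history is set at the moment generation starts.
    print(f"sending {len(history)} messages")

async def main():
    # Racy: generation is kicked off before the history write finishes,
    # so it sees an empty history -> prints "sending 0 messages".
    task = asyncio.create_task(send_message())
    await set_message_history([{"role": "user", "content": "hi"}])
    await task

    # Safe: forcing the write to complete first (the custom-workflow
    # workaround) -> prints "sending 1 messages".
    history.clear()
    await set_message_history([{"role": "user", "content": "hi"}])
    await send_message()

asyncio.run(main())
```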
Thanks again for the pointer & help debugging this Hrishikesh!
Hi all,
There is currently a bug in the plugin causing all uses of “Send Message”, after the first, to fail with the error below.
I’m looking into this issue now, but I don’t think it’s directly an issue with the plugin, as nothing had changed before the break, so I have opened a support ticket with Bubble.
In the meantime, you can upgrade to 5.9.0, which adds a new action, “Send Message to OpenAI Directly”. It talks directly to OpenAI instead of the backend server, and has most of the same functionality.
This release was a little rushed, given the current break, so please let me know if you encounter any errors.
Thanks for your patience everyone!
Sample error message:
Plugin action ChatGPT - Send Message error:
Error: Cases weren't exhaustive: {"role":"user","content":"Tell me about Bubble.io"} }
at exhaustive (/var/task/util/util_lambda.js:142:9)
at get_wrapped_pw (/var/task/util/util_lambda.js:154:12)
at /var/task/util/util_lambda.js:181:47
at Array.map (<anonymous>)
at Object.get_wrapped_list (/var/task/util/util_lambda.js:181:35)
at LambdaListWrapper.get (/var/task/plugin_api_v3.js:54:40)
at eval (eval at build_function (/var/task/util/util_harness.js:38:12), <anonymous>:19:61)
at eval (eval at build_function (/var/task/util/util_harness.js:38:12), <anonymous>:135:8)
at /var/task/plugin_api_v3.js:250:27
at run_fn (/var/task/u.js:550:18)
Error: Outer Error (see above for inner error)
at Block.wait (/var/task/u.js:399:33)
at await_promise_on_fiber (/var/task/plugin_api_v3.js:206:25)
at LambdaListWrapper.get (/var/task/plugin_api_v3.js:54:12)
at eval (eval at build_function (/var/task/util/util_harness.js:38:12), <anonymous>:19:61)
at eval (eval at build_function (/var/task/util/util_harness.js:38:12), <anonymous>:135:8)
at /var/task/plugin_api_v3.js:250:27
at run_fn (/var/task/u.js:550:18)