[Updated Plugin] ChatGPT with Real-Time Streaming (No more timeouts!)

Seems like sometimes I’ll get no response. Just a blank: a new response bubble appears (following the Bubble demo where even/odd index shows the chatbot vs. human icon), so I’ll see the chatbot icon, but no text in the text field. If I message again, I do get a response as if everything’s normal. Any ideas on why that generally might occur, or what to try to minimize the odds of it happening?

1 Like

We really need it to be completely reliable, a 10% error rate (which is what it feels like) is way too high. It’s been a few weeks of this now

1 Like

Hi - Is it possible to integrate the plug-in with a vector database such as Weaviate to ensure all messages and conversations can be stored there? Vector databases provide long-term memory for LLMs such as ChatGPT.

1 Like

Yes @tpolland, I did it with FlexGPT. I created a template that allows you to set it up very simply too: TEMPLATE: CustomGPT - train your own GPT chatbot

If you want to create your own from scratch, the documentation for this template might help you see how it can be done

1 Like

Is “xxxxx” the actual prompting? So… would I say “using the user’s message below, do (dynamic) ‘Input 1 (API)’”, which is where my back-end prompting is stored?

**Edit: Input 1 (API) is not where my back-end prompting is stored, so I really don’t know how I’d do it. For example, I have a tool that stores a sample prompt in a field called “Prompt Start (API)”. How would I access this in the system message?

**Edit 2: This is what I have. Is this what you’re talking about?

@gulbranson.nils - you could try that, see what you get. You could also try something like:

  1. Message 1
    - Type: System
    - Content: “Using xxUSERMESSAGExx, do ‘Current Page Tool’s Prompt Start (API)’.”
  2. Message 2
    - Type: User
    - Content: “xxUSERMESSAGExx: Input A’s value” (assuming Input A is the textbox where the user types).

Note that you’re specifically typing in xxUSERMESSAGExx, or something similar. That’s static text, not dynamic. It’s meant to be a tag, or indicator of where the model should look for the inputted user data.

Or, a simpler way: just have one message, of type User, and set its content as the following:
"
Using xxUSERMESSAGExx, do ‘Current Page Tool’s Prompt Start (API)’

xxUSERMESSAGExx: ‘Input A’s value’.
"

I would try both with and without a system message, see which works better. You don’t always need the system message to get the right sort of response, especially if your instructions are fairly short/simple.
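For anyone who thinks better in code, here’s a rough sketch of what those two setups amount to as the message list that ultimately goes to the chat model. The function name and parameters are mine, not plugin internals: `prompt_start` stands in for “Current Page Tool’s Prompt Start (API)” and `user_input` for Input A’s value.

```python
def build_messages(prompt_start: str, user_input: str, use_system: bool = True):
    """Return a chat-completion-style message list.

    The literal tag xxUSERMESSAGExx is static text: it just tells the
    model where to look for the user's input.
    """
    instruction = f"Using xxUSERMESSAGExx, do '{prompt_start}'"
    if use_system:
        # Two-message variant: instruction as System, data as User.
        return [
            {"role": "system", "content": instruction},
            {"role": "user", "content": f"xxUSERMESSAGExx: {user_input}"},
        ]
    # Single-message variant: instruction and data in one User message.
    return [
        {"role": "user",
         "content": f"{instruction}\n\nxxUSERMESSAGExx: {user_input}"},
    ]
```

Either variant produces the same tag-plus-data structure; the only difference is whether the instruction travels in a separate system message.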

Regarding the occasional empty responses (no text returned) from the chatbot, I was trying to build a hacky workaround to detect this and retry when it occurs.

I was wondering if the “Clear Current Message of ChatGPT-DataContainer” can be used for this?

I’m not sure I know what that means. (I guessed it might delete the most recent message, which would then be cleared from the list of texts in the repeating group.) But when I tried to use it, it seemed to have no such effect.

Can you clarify what clearing current message of chatgpt-datacontainer does?
Any other ideas for how to eliminate the experience of the occasional empty response?

Thx!

@davewliu - the “Current Message” is the message that is currently being streamed (or just finished being streamed). Clearing it won’t remove the last message from Message History or Display Messages (which are typically what you’d want to show in a RG).

A possible workaround is to have a delayed event that executes every 2 s or so while some state is true, and then does something if Current Message is still empty.

Something like:

  1. User clicks “Generate”
  2. Clear Current Message (you probably want to wrap this in a custom event; see above)
  3. Set custom state “waiting” to “yes”
  4. Have a workflow event that triggers every 2 seconds when “waiting” is “yes”, and check whether Current Message is empty.
    4a. If “Current Message” is empty: have that workflow do something (call the Fallback action, show an alert, re-run the workflow, etc.)
    4b. If “Current Message” is not empty: do nothing (or do something else), and set “waiting” to “no”.
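The polling logic above can be sketched in a few lines. This is only an illustration of the control flow, not plugin code: `get_current_message` stands in for reading the Current Message state, and `on_empty` for whatever fallback step 4a runs (alert, re-run, etc.).

```python
import time

def wait_for_stream(get_current_message, on_empty,
                    poll_interval: float = 2.0, max_checks: int = 5) -> bool:
    """Poll Current Message every poll_interval seconds.

    get_current_message: callable returning the streamed text so far ("" if none).
    on_empty: fallback callable, invoked once if nothing ever arrives.
    Returns True if text started streaming, False if the fallback fired.
    """
    for _ in range(max_checks):
        time.sleep(poll_interval)
        if get_current_message():   # text has arrived: stop "waiting"
            return True
    on_empty()                      # still empty after all checks: retry/alert
    return False
```

In Bubble terms, each loop iteration corresponds to one firing of the every-2-seconds workflow event, and returning `True` corresponds to setting “waiting” back to “no”.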

This is a horrendous hack lol, and shouldn’t be necessary once all the kinks are worked out. Just about ready to release an update which will hopefully see these failures go away :crossed_fingers:

Awesome to hear we’re an update away from this issue. Thanks so much for the amazing support.

1 Like

Will try this in the interim.

Getting an odd error: the text ‘cookie’ appears near the beginning of my first streamed GPT output. Not sure what I changed, if anything. Any hints as to why this might be happening and how to fix it? Thanks, and sorry for all the questions!

Working on that now, bug from the latest update. Should be fixed soon. Sorry!

1 Like

Really thankful for your amazing work :slight_smile:

1 Like

Hi all,

Have just released 5.12, but it has a serious bug: please do not upgrade yet!

Will post full update message shortly.

1 Like

Hi again all,

ChatGPT is currently down. You can check here for status updates: https://status.openai.com/

EDIT - it’s back online again

1 Like

Hi all,

Just released the first version of the stability upgrades, with version 5.12.5 :partying_face: :european_castle:

This is a major upgrade to the backend infrastructure of the plugin, and should (I hope) take care of the issues we’ve been seeing with connections and responses failing.

These upgrades are effectively what we’ve been calling “version 6.0” above, but I’m releasing them as a minor update first, so that we don’t have to wait another 2 days or more for plugin review. Once everything is completely stable, I’ll release it as 6.0.

I’ve also updated the previous/existing backend, so even if you don’t upgrade the plugin version, you should see more stable behaviour than before. On the same note, if you upgrade and run into issues, you can roll back to 5.11.

To update your workflows, you just need to fill in the “Connection Info” field of “Send Message w/ Server” and “Stop Stream”; see screenshot below. Everything else should work as before without modification.

PLEASE NOTE: This is still experimental. I think I have tested it fairly thoroughly, but I have very likely missed some things that won’t pop up until we’ve got thousands of users active at the same time. If you have lots of users in production, you may want to wait a couple of days before upgrading, just to make sure we’ve worked out all the kinks.

Also note - This latest version requires a “Third-party cookie” to be set in the browser (the “third-party” in this case is just the plugin, and not your app’s domain). Lots of people turn these off (including myself). For context, this is what allows the plugin to know which server to communicate with. Currently if these are not available, the request will fail. I will be releasing another update soon so that if this cookie isn’t set, we’ll use the “backup” server. So the cookie won’t be strictly necessary, but will be required for increased reliability/performance, and you shouldn’t have to worry about whether your users have them disabled or not; it should just work.
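The planned cookie-fallback behaviour boils down to a one-line decision. Here’s a sketch; the cookie name and backup URL are made up for illustration, since the real values are plugin internals:

```python
def pick_server(cookies: dict,
                cookie_name: str = "plugin_server",
                backup_url: str = "https://backup.example.com") -> str:
    """Choose which server to talk to.

    If the third-party cookie naming the assigned server is present, use it;
    otherwise fall back to the backup server so the request still succeeds
    even for users who block third-party cookies.
    """
    return cookies.get(cookie_name) or backup_url
```

The point is simply that the cookie becomes an optimisation (sticking to the assigned server) rather than a hard requirement.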

If you find any issues with the updated version, please let us all know here! That’s the quickest way to get things fixed.

Good luck, and have fun with the robots! :robot:

(cc @georgecollier @hugors00 @jaos.pcl @davewliu @sonlovin)

3 Likes

My apologies, I am new to this type of SaaS stuff. Does “xxUSERMESSAGExx” mean something dynamic, or is it static? I will try to explain the situation better so I can understand it fully. Thank you for helping :smiley:

For simplicity, let’s say that my Prompt Start (API) is “write me a poem on the following topic and tone”

The user would use 2 inputs. A topic, and a tone. So, on top of my Prompt Start (API), there is Prompt before Input 1 (API) and Prompt before Input 2 (API), being “topic:” and “tone:” respectively.

Would those also need to be dynamic inputs? My confusion is over whether “xxUSERMESSAGExx” is dynamic or whether I am hard-coding “write me a poem on the following topic”.

I hope this makes sense… If it doesn’t and I’m being confusing, then I apologize! It’s just that the feedback I’ve gotten for my app is that it takes too long to load, and I’d like for this to be implemented :slight_smile:

1 Like

@gulbranson.nils - no worries. This stuff is tricky.

I think about it like templates. You have a message template, with fields that get filled in. (this is one way of doing it; there are others)

So your message could be something like this:

Write me a poem on the following TOPIC in the provided TONE:

TOPIC:
{dynamic data here, from input or database}

TONE:
{dynamic data here, from input or database}

The stuff that’s in { curly braces } you would replace with the blue dynamic-value options in Bubble (from your database or an input box). The rest is hardcoded. So in the above example, xxUSERMESSAGExx is hard-coded, just a tag to indicate what role the next piece of text plays.

You can use many different formats for your tags. I think OpenAI recommends something XML-like, sort of like this:

<user-message-tone> Playful </user-message-tone>

<user-message-topic> Financial markets </user-message-topic>

The text in between the tags can be anything. You just want to use some way to mark text as having special significance for the task you’re trying to accomplish. GPT is quite robust/flexible in interpreting templates like this, so you can experiment and see what feels natural.
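As a concrete sketch, filling such a template is just string substitution; everything outside the tags is hard-coded, and only the two values are dynamic. The function name is mine, and the tag names follow the example above:

```python
def fill_template(topic: str, tone: str) -> str:
    """Build the final prompt from a hard-coded template.

    Only `topic` and `tone` are dynamic (what Bubble would fill in from
    an input box or the database); the surrounding text and XML-like
    tags are static.
    """
    return (
        "Write me a poem on the following TOPIC in the provided TONE:\n\n"
        f"<user-message-topic> {topic} </user-message-topic>\n"
        f"<user-message-tone> {tone} </user-message-tone>"
    )
```

In Bubble you’d do the same thing directly in the content field: type the static text and tags, and insert the blue dynamic values where the placeholders go.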

Does that help?

Really appreciate all the work you’re doing on this plugin.

You’ve touched on this already earlier in the thread but I wanted to wrap my head around it to be sure, and to check if things have changed with more recent updates.

For my system prompt, I’ve given the AI a list of instructions to follow in sequence. With the new update, will the AI be better at following these steps and sticking to its system instructions, even if the message history is more than 8 messages long?

And, as I understand it, message history is limited to 8, right? Is there a way to extend that? I’ve run into instances where the AI starts to repeat a step in the process again because it’s forgotten it’s already covered those parts of the conversation.

2 Likes

Not sure if it’s something on my end, but the 12.3 update broke the plugin for me and nothing would stream. I added the connection info as instructed.

I reverted to 5.11 as advised, and things are working again.