[Updated Plugin] ChatGPT with Real-Time Streaming (No more timeouts!)

Yes @launchable, I did all that, but it wasn't working. I built one from scratch and I think I found my problem.

  • Before: I was using “Chat GPT - Send message to openai directly from a chat gpt data container”
  • Now: When I try that, it doesn’t work. But when I use “Chat GPT - Send Message w/server” it works.

That said: should I change my old setup to “Chat GPT - Send Message w/server”?

@joaquintorroba ahh, okay. In that case, you can set your API key in the Send Message Direct to OpenAI workflow, instead of the plugin tab. That should do it.

Note, though, that you probably only want to use that action if your users provide their own keys (i.e., you're not providing your key for everyone). Otherwise your key could be found by an unsavoury type. :female_detective:

If you are providing the key, definitely use Send Message w/ Server.

Perfect, thanks @launchable .
One last quick question: How can I ensure that as the chatbot replies, there’s an auto-scroll to the latest response so users don’t have to manually scroll down? I’ve tried using ‘Scroll to’ without success – maybe I’m doing something wrong?

The way I’ve done it is to add an empty dummy element to the repeating group that’s showing the Display Messages. Then you scroll to that every 0.5 seconds or so, while “Currently Streaming?” is true.

This is detailed more up above. It works quite well imho, but let me know if you have less luck with it.
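For anyone wiring this up, the polling logic above can be sketched in plain JavaScript. Everything here is illustrative: `makeAutoScroller`, `anchorEl`, and the `isStreaming` getter are assumed names, not part of the plugin — in Bubble you'd drive the same idea with a "Do every 0.5 seconds" workflow and a Scroll To action.

```javascript
// Sketch of the dummy-element auto-scroll trick (assumed names,
// not the plugin's API). `anchorEl` is the empty element at the
// bottom of the repeating group; `isStreaming` reports the
// "Currently Streaming?" state.
function makeAutoScroller(anchorEl, isStreaming) {
  let timer = null;
  const tick = () => {
    if (isStreaming()) {
      // Keep the newest message in view while tokens stream in.
      anchorEl.scrollIntoView({ behavior: "smooth", block: "end" });
    } else if (timer !== null) {
      clearInterval(timer); // stop polling once the reply is done
      timer = null;
    }
  };
  return {
    start(intervalMs = 500) { timer = setInterval(tick, intervalMs); },
    tick, // exposed so each polling step can also be driven manually
  };
}
```

The 0.5-second interval is frequent enough to feel smooth during streaming but cheap enough not to matter once the reply finishes, since the interval clears itself.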

Thanks a lot, this solved it.

Any updates on function calling? Still getting no response from OpenAI after a successful API call. Is there maybe a timeout if the API takes too long to reply?

Hey @maurovz . Could be a timeout, but there should be a timeout error if so.

Can you follow up in DMs about this and I’ll help troubleshoot?


Thank you for your continued dedication to building and maintaining this solution. Keep up the excellent work, and thank you for your proactive approach.


Hi there,

I have noticed the following behavior in some user chats - each word in a response is duplicated. Is this something I am doing wrong in the setup?

I’ve seen it before but can’t replicate it. It’s not your fault; it’s a plugin issue.

Hey @sacoetzee , are you on the latest version of the plugin?

And do you have 1 or multiple data containers on your page?

Only one data container and I am running version 17.7

Any way to add the OpenAI-Organization header to requests from the plugin?

@georgecollier - not atm, but I’ve got a new major version (6.0) nearly ready, and I’ll make sure it’s in there. Should be ready within a week.
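In the meantime, anyone making the OpenAI call from their own server can add the header themselves — it's just one more key on the request. A minimal sketch (the `buildChatRequest` helper is invented for illustration, and `"org-XXXX"` is a placeholder organization ID; the endpoint and `OpenAI-Organization` header name are the standard OpenAI API values):

```javascript
// Sketch: attaching the OpenAI-Organization header to a chat
// completions request. buildChatRequest is a hypothetical helper;
// "org-XXXX" stands in for a real organization ID.
function buildChatRequest(apiKey, orgId, messages) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
      "OpenAI-Organization": orgId, // attributes usage to this org
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages,
      stream: true, // the plugin streams responses
    }),
  };
}

// Usage (server-side, so the key stays private):
// fetch("https://api.openai.com/v1/chat/completions",
//       buildChatRequest(apiKey, "org-XXXX", messages));
```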


Are there any issues with the gpt-3.5-turbo-16k model? We do not get any response.

Hey @hugors00 - I don’t think so, but I’ll look into this. Thanks for letting me know.

Edit - I’m not seeing any issues with this model :thinking:

Are your requests with other models all working fine?

One thing you could try is adding an error-capture event to the page, to see if an error is coming back. The only thing I can think of atm is that you might be getting a “model unavailable” error if you don’t have access to 16k, but that’s probably not the case, as I believe 16k is available to everyone.

What would be the best way to let the user know about an error? An example is occasionally when an input is typed in and “generate” is clicked, no text will show up. If that’s going to be the case is there a workflow/element that can be added?

I saw one talking about fallback messages but I’m not sure how that should be used… or how to check if the message is gonna fail.

@gulbranson.nils - currently errors all throw different events - e.g., if an API key is invalid, there’s an API Key error event. If there is a timeout with the API, there’s a timeout error event, etc.

So with the current version, you could set up events to capture each of these errors, and show them in an alert or a text element, etc.

In the next release (v6), this will be simplified: there will be a single error event, with a different message set inside the Data Container. Then you can set up one event to handle all errors and show a message based on what’s in the Data Container. Hopefully this will make it simpler to handle any issues that pop up (and hopefully with the rebuild there will be no errors to handle in the first place :slight_smile: )
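The single-error-event pattern could look roughly like this once v6 lands. Note this is a guess at the shape, not the plugin's actual API: the error strings in the lookup table and the `userFacingError` helper are invented examples — the real values will be whatever the Data Container reports.

```javascript
// Hypothetical sketch of one handler for the v6 single error event.
// The keys below are invented placeholders, NOT the plugin's real
// error strings; swap in whatever the Data Container actually sets.
const ERROR_MESSAGES = {
  "invalid-api-key": "Your API key looks invalid - please check it.",
  "timeout": "OpenAI took too long to reply. Please try again.",
};

function userFacingError(containerError) {
  // Fall back to a generic message for anything unrecognized,
  // so users never see a raw error string.
  return ERROR_MESSAGES[containerError] ??
    "Something went wrong - please try again.";
}
```

In Bubble terms, the single error event would run one workflow that reads the Data Container's error message and shows the mapped text in an alert or text element, replacing the per-error-type events needed today.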

Heyo, is anyone else getting this, or are my prompts just glitching :laughing:

@georgecollier - is your temperature set really high? Or are you manually setting a container ID?