
[Updated Plugin] ChatGPT with Real-Time Streaming (No more timeouts!)

I’m pretty sure I have third-party cookies allowed (I checked my browser settings and the ‘block cookies’ option was turned off). Unfortunately, the issue still persists. Screenshot attached.

@launchable thanks for the mention! :open_hands:
@robogpt you can use the web loader of fine-tuner.ai, happy to help setting it up!


@mdajinani - just to be sure - are you on 5.12.15, and setting Connection Info in your workflow?

Yep, 5.12.5, and connection info included.

@mdajinani - Hm… strange. The service is definitely running, I just tested it.

Using Debug Mode, if you click Inspect and find the Data Container in the element drop-down, does it have an ID, Connection Info, Message History, etc.?

I was troubleshooting with another user today who wasn’t seeing any results because the Data Container was hidden. You could also check in Inspect mode that it’s not Invisible, because if it is, it won’t load data. (Note that the element is invisible by design, so you don’t need to hide it yourself.)

You could also check that your API key is being set correctly.

If none of those three checks turns up an answer, try clicking through the workflow in “Step by Step” mode under Debug Mode and see if there are any more clues.

If you’re still stuck, happy to set up a time to debug via Google Meet / Zoom / etc.

Hey all,

Updates to the servers have been pushed. Both the timeout issue and the “missing messages” issue have been improved in the latest version (5.12.15). There are still occasional cases where nothing comes back; this is caused by the OpenAI API timing out. The next update will address this by timing out sooner and auto-retrying, so this hopefully stops happening altogether.
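For anyone curious, the “time out sooner and auto-retry” idea looks roughly like this. This is just a sketch: `fn` stands in for the plugin’s request function, and the timeout and retry counts are illustrative, not the plugin’s actual settings.

```javascript
// Race a request against a timer so a hung call fails fast, then retry.
async function withTimeoutAndRetry(fn, { timeoutMs = 10000, retries = 2 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await Promise.race([
        fn(),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error("timeout")), timeoutMs)
        ),
      ]);
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: surface the error
      // otherwise fall through and try again
    }
  }
}
```

The effect is that a stalled request errors out after `timeoutMs` instead of hanging indefinitely, and transient failures get a couple of automatic retries before the user sees an error.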

Let me know if you’re seeing any issues, old or new. Hopefully things are getting increasingly solid! :signal_strength:

On another note - I’ve got a design in progress that doesn’t require the third-party cookie. If the cookie is a deal breaker for you, please let me know. Trying to find out whether it’s necessary to redesign away from this.

I followed your steps. The inspect element showed the container is there and connected (screenshot) and it’s visible in the Design view.

The workflow also has all the required fields, including my OpenAI API key.

I’ve also attached a screenshot of the step by step debugger that shows what’s happening.

Happy to debug this over a call with you today if you wish (bear in mind I’m still learning Bubble!).

As soon as I switch back to 5.11, everything works fine and I get results as expected.




Update:

I tried reverting the plugin to all the 12.x versions and got the same error. Things work as soon as I get to the 11.x versions.

Having said that, I did somehow manage to get the 12.5 working once or twice. I have no idea how - I just kept refreshing the page and trying. Here’s the debugger for when it worked. I don’t see anything different compared to the earlier screenshot.

Finally, I think it’d be better to avoid the third-party cookie approach, since some users may have cookies blocked or be using browser extensions or ad blockers.


Okay, good effort debugging. I’ll DM you to set up a debugging call.

Also having no joy with 5.12. Sticking with 5.11 for now.

@sacoetzee - ahh, this is perplexing.

To debug, another thing to try is to put a text element on the page and have its content/source be the Data Container’s Connection Info. If the text stays empty on page load, it means the cookie isn’t being set. If the text populates, it should work. As part of this testing, also ensure that cookies, especially third-party cookies, are permitted.
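You can also run a quick check in the browser console. Here’s a rough sketch; the cookie name `connection_info` is a hypothetical placeholder, so check your browser’s storage tab for the plugin’s actual cookie name.

```javascript
// Return true if a cookie with the given name appears in a cookie string
// (e.g. the value of document.cookie in the browser console).
function hasCookie(cookieString, name) {
  return cookieString
    .split(";")
    .map((pair) => pair.trim().split("=")[0])
    .includes(name);
}
```

In the console on your running app, `hasCookie(document.cookie, "connection_info")` returning `false` would line up with the “cookie isn’t being set” case above.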

My guess is that there is too much variability with people’s cookie settings and browser behavior, and a non-cookie-based setup will be needed (which 5.11 uses, and 5.13 might as well, if we can’t resolve this in the next couple of days).

Yes, I’m sometimes seeing issues where messages are missing and it just keeps loading.

Hey @launchable, I think your editor might benefit from a more up-to-date tutorial and clearer error responses. I can’t get the plugin to work, but I’m not sure why. Here are the screenshots:




@nocodejordan - Thanks for the feedback! You’re right on both fronts. I’ll put out a new tutorial video tomorrow, and I’ll clarify the various error responses. Stay tuned!

Thank you @launchable. I’d also appreciate an explanation of the following values, which don’t have inline documentation the way the workflow steps do. I’m especially curious whether “last request token usage” records the prompt usage or the total token usage.

@nocodejordan - The “last request token usage” is the total (input + output). I’m planning to add 2 more things though:

  1. an action to calculate input tokens, before sending a request to ChatGPT,
  2. and to change the value of “last request token usage” to be broken down by input, output, and total.
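To illustrate the planned breakdown: the OpenAI API returns a `usage` object with `prompt_tokens`, `completion_tokens`, and `total_tokens` on each completion, so exposing the split is mostly a matter of surfacing those fields. A sketch:

```javascript
// Split the OpenAI `usage` object into input / output / total counts.
function breakDownUsage(usage) {
  return {
    input: usage.prompt_tokens,      // tokens in the messages sent
    output: usage.completion_tokens, // tokens in the generated response
    total: usage.total_tokens,       // input + output
  };
}
```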

I’ll also double check to see if there are other actions missing documentation.

Cheers

Hi @launchable, thanks for the plugin. I’ve subscribed and have begun revamping my app with your plugin today.

Is there a way to use the “n” OpenAI parameter to ask for multiple variants of the response? I’m sending a long system prompt with each user prompt, so being able to ask for multiple responses would mean large token savings on my side…

@hejtmanekp - glad you’re finding the plugin useful!

I haven’t added support for the “n” parameter yet. The way I’ve been doing multiple response streams is to have multiple Data Containers on the page. But your point about cost savings on the input prompt is well taken; I hadn’t thought about it from that angle.

I’ll do some more digging and see if there’s a way to support it.
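For reference, here’s roughly what an “n” request to the Chat Completions API would look like. This is a sketch of the request body only, not something the plugin sends today; the model name is illustrative.

```javascript
// Build a Chat Completions request body asking for several response variants.
// With `n`, the (long) input messages are sent once and the API generates
// `variants` completions for them.
function buildChatRequest(messages, variants) {
  return {
    model: "gpt-3.5-turbo", // illustrative model name
    messages,
    n: variants, // number of response variants to generate
  };
}
```

The savings come from the prompt side: with `n` the prompt is submitted once, whereas separate requests would resend (and re-bill) the long system prompt each time.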

Cheers

Hi all,

Quick note: trying to debug the newest version with a user today, we bumped into issues that seemed linked to Safari. If you’re building/testing, and seeing issues, please try different browsers and let us know what you find. I’m looking into what might be going on.

@launchable thanks

btw, is there a way to access the token counts returned by the API?

@hejtmanekp - there is a “Last Request Token Usage” state that shows the total tokens used by the last request (input + output). I’ll soon be adding a more fine-grained count and a “pre-call” count (i.e., you can get a token count for the input messages before generating).
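Until the pre-call count ships, a rough estimate is possible client-side. Note this is only a heuristic: an exact count needs the model’s tokenizer (e.g. a library like tiktoken), and the ~4-characters-per-token ratio below is a common rule of thumb for English text, nothing more.

```javascript
// Rough pre-call estimate of input tokens for an array of chat messages,
// using the ~4 characters-per-token heuristic for English text.
function estimateInputTokens(messages) {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / 4);
}
```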
