Hey Everyone,
So I am sure everyone is aware that there is no easy way to stream OpenAI responses currently. What we have to do instead is create a Run in OpenAI, and then poll the endpoint until the run is marked as “completed” or “failed”.
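For anyone who wants to see the shape of that polling loop outside of Bubble, here is a minimal Python sketch. The loop itself is the real technique; the `get_status` callable stands in for the actual OpenAI call (a GET on `/v1/threads/{thread_id}/runs/{run_id}` in the Assistants API), which I've stubbed here so the example is self-contained:

```python
import time

# Statuses the Assistants API treats as final for a run
TERMINAL = {"completed", "failed", "cancelled", "expired"}

def poll_run(get_status, interval=1.0, max_attempts=30):
    """Poll until the run reaches a terminal status, or give up."""
    for _ in range(max_attempts):
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError("run did not reach a terminal status in time")

# Stub simulating the sequence of statuses a real run goes through;
# in practice get_status would hit the OpenAI retrieve-run endpoint.
statuses = iter(["queued", "in_progress", "in_progress", "completed"])
print(poll_run(lambda: next(statuses), interval=0))  # prints "completed"
```

In Bubble the same loop is what you rebuild with a recursive backend workflow: check the status, and if it isn't terminal, schedule the workflow to run again after a short delay.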
I’ve filmed a video on how you can set this up using as few workflows as possible, and also how you can get the responses from your assistants back in a fixed JSON structure that you can actually work with in Bubble.
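The “fixed JSON structure” part boils down to instructing the assistant to always reply in one schema, then validating the reply before you trust it. A rough sketch of that validation step (the keys `answer` and `sources` are just hypothetical examples of a schema you might choose):

```python
import json

def parse_reply(text, required_keys=("answer", "sources")):
    """Parse an assistant reply that was instructed to return fixed JSON,
    and fail loudly if the expected keys are missing."""
    data = json.loads(text)
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"reply missing keys: {missing}")
    return data

reply = '{"answer": "Use a recursive workflow.", "sources": []}'
print(parse_reply(reply)["answer"])
```

A predictable structure like this is what lets Bubble’s API Connector initialize the call once and map the fields into your app reliably.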
I also released a plugin that handles the polling for you if anyone wants to use it, but I’m hoping this video helps you set up your backend polling process in a more scalable way at the very least.
Good Luck!
Plugin Link: OpenAI Poller | APG Software Solutions