Customized GPTs are great, but there isn’t much flexibility in how you can design and interact with them via ChatGPT.
Thankfully, the Assistants API is here to solve just that. It’s highly customizable: you can build your own chatbots and use them however you want within your Bubble application.
I am making a series of tutorials to help you navigate your way through setting up the OpenAI Assistants API.
There are three parts to the process:
Creating an Assistant (GPT) - this can be done on the OpenAI playground or via the API
Interacting with the GPT - threads, messaging, and running the Assistant
Front end in Bubble - displaying the interaction in your Bubble app
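For orientation, the interaction part boils down to a handful of HTTP calls. This is a minimal sketch based on the public Assistants API paths and the v2 beta header (in Bubble, each of these would become an API Connector call); `YOUR_API_KEY` is a placeholder, not a real key.

```python
BASE = "https://api.openai.com/v1"

HEADERS = {
    "Authorization": "Bearer YOUR_API_KEY",  # placeholder, not a real key
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2",          # beta header required by the Assistants API
}

# Interaction flow: create thread -> add message -> run -> poll run -> read reply
CALLS = {
    "create_thread": ("POST", f"{BASE}/threads"),
    "add_message":   ("POST", f"{BASE}/threads/{{thread_id}}/messages"),
    "create_run":    ("POST", f"{BASE}/threads/{{thread_id}}/runs"),
    "get_run":       ("GET",  f"{BASE}/threads/{{thread_id}}/runs/{{run_id}}"),
    "list_messages": ("GET",  f"{BASE}/threads/{{thread_id}}/messages"),
}
```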
The first video in the series focuses on the interaction part: how to set up the API calls in Bubble to send and receive messages.
I’ve kept it very sharp and to the point.
Check it out.
Stay tuned as I’ll be covering the other two topics very soon.
Hi Zubair, thanks for creating these videos. Would you be open to sharing your editor or providing screenshots of the workflow and API calls?

I had initially set up the calls and workflow a slightly different way, but was unable to get the RG to refresh after a user submitted a message, prior to the assistant returning a response. If I manually refreshed the page, the RG content was correct and included both the user’s message and the assistant’s response.

I tried setting things up the same way you did and am still running into the same problems: the RG refresh isn’t happening, my ‘get run’ API call doesn’t seem to show a single run ID (it returns a list), and my notifications aren’t working as part of the ‘do every second’ workflow.
Thanks!
-ben
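On the ‘get run’ call returning a list: the Assistants API has two easily confused run endpoints, and which one you get depends on whether the URL ends with a run ID. A small sketch (the helper names here are my own, for illustration):

```python
BASE = "https://api.openai.com/v1"

def get_run_url(thread_id: str, run_id: str) -> str:
    # Retrieve ONE run -> a single run object with a "status" field to poll
    return f"{BASE}/threads/{thread_id}/runs/{run_id}"

def list_runs_url(thread_id: str) -> str:
    # Without a run_id, the same path LISTS runs -> {"data": [run, run, ...]},
    # which matches the "returns a list" behaviour described above
    return f"{BASE}/threads/{thread_id}/runs"
```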
Scratch that, I actually got it sorted last night.
It’s actually surprising how quickly the responses come back using GPT-3.5 Turbo. I’d say the speed is good in one sense (users aren’t left hanging, wondering if the chat has hung), but it’s almost too fast and feels unnatural. I’ll likely add a pause and a Lottie animation to make it look like the assistant is typing, then let the response come through. Thanks again for these videos, much appreciated!
For other people, I’ve made the app publicly readable.
However, I had to revoke the API key, so the app won’t work.
But the editor view is visible; not sure how useful that would be.
In an ideal world, there would be a way to share an app’s editor without exposing the API keys… But even then, OpenAI is expensive and the bill can start adding up.
Thanks! Definitely still helpful even without the API key. I think one of the things that tripped me up was the ‘list messages’ call: I initially had it as a data call rather than an action (and then used that as the data source for the RG), but that’s likely more due to my inexperience with API calls. Thanks again for sharing!
Hello,
Thank you for your advice.
I followed your instructions in the first video, and I don’t understand the steps in the second.
My problem is the following: after following your first tutorial, I am unable to generate an answer to a question sent to my assistant via the API.
It shows me the descriptive text of my assistant (its instructions), instead of providing me with an answer.
What am I missing? How do I edit my action button so that it works?
Thanks a lot!
Hi, I have the same problem. Please tell me if you managed to solve it. I ran several tests and realized that in ‘:each item’s content :each item’s text’ mode it displays according to the following pattern if you make two queries:

Query 1:
system text <request text 1>

Query 2:
system text <request text 1> <response to request 1> <request text 2>
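One way to avoid re-displaying that snowballing history is to filter the list-messages response before binding it to the RG. This is a sketch under an assumed simplified message shape (a dict with `role` and `text` keys); the function names are my own:

```python
def visible_messages(messages):
    # Drop anything that is not a user or assistant turn (e.g. leaked
    # system/instruction text) while keeping chronological order
    return [m for m in messages if m["role"] in ("user", "assistant")]

def latest_assistant_reply(messages):
    # Walk backwards to find only the newest assistant message
    for m in reversed(messages):
        if m["role"] == "assistant":
            return m["text"]
    return None
```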
I see they just introduced streaming for Assistants within the last month or so. Any idea if they’ve also enhanced it so we no longer need to keep polling?
Thanks! Streaming is a nice UX, but I was thinking more in terms of whether, with this update, we can do away with the constant polling that continuously checks for a response. So basically still use the original method Zubair demonstrated, but without the polling. Then we get the efficiency of not having to send large snowballing payloads to OpenAI, and we also do away with the wasted WU it takes to keep polling OpenAI.
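If you do keep polling, backing off between checks cuts the number of calls (and the WU they cost) compared with a fixed ‘do every second’ loop. A minimal sketch: `poll_run` and `get_status` are assumed names, and `get_status` would wrap your ‘get run’ API call.

```python
import time

def poll_run(get_status, interval=1.0, backoff=1.5, max_wait=60.0):
    """Poll get_status() until the run reaches a terminal status,
    waiting a little longer between each check."""
    waited = 0.0
    while waited < max_wait:
        status = get_status()
        if status in ("completed", "failed", "cancelled", "expired"):
            return status
        time.sleep(interval)
        waited += interval
        interval *= backoff  # stretch the gap between checks
    raise TimeoutError("run did not finish within max_wait seconds")
```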
My friend, may God bless your life immensely!
People like you deserve all the success in the world; I wish you prosperity on your journey.
Thank you very much!!!
If anyone is still looking for a solution to this, I’ve built a much simpler way to build a full-fledged chatbot with the OpenAI Assistant that can be published to websites, here’s a video on it: