Connecting the OpenAI Assistants API with your Bubble application

Custom GPTs are great, but there isn’t much flexibility in how you can design and interact with them via ChatGPT.

Thankfully, the Assistants API is here to solve just that. It’s highly customizable: you can build your own chatbots and use them however you want within your Bubble application.

I am making a series of tutorials to help you navigate your way through setting up the OpenAI Assistants API.

There are three parts to the process:

  1. Creating an Assistant (GPT) - this can be done in the OpenAI Playground or via the API

  2. Interacting with the GPT - threads, messaging, and running the Assistant

  3. Front end in Bubble - displaying the interaction in your Bubble app

The first video in the series focuses on the Interaction part: how to set up the API calls in Bubble to send and receive messages.
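For reference, the Interaction step boils down to a handful of REST calls, the same endpoints you would configure in Bubble’s API Connector. Below is a minimal Python sketch of that flow; the assistant ID is a placeholder and the v2 beta header is an assumption about your setup.

```python
import os
import time
import requests

# The same REST endpoints you would set up in Bubble's API Connector.
# Assumptions: the v2 Assistants beta header and a pre-created assistant ID.
API_KEY = os.environ["OPENAI_API_KEY"]
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2",
}
BASE = "https://api.openai.com/v1"
ASSISTANT_ID = "asst_..."  # placeholder: created in the Playground (part 1)

# 1. Create a thread (one per conversation).
thread = requests.post(f"{BASE}/threads", headers=HEADERS, json={}).json()

# 2. Add the user's message to the thread.
requests.post(
    f"{BASE}/threads/{thread['id']}/messages",
    headers=HEADERS,
    json={"role": "user", "content": "Hello, assistant!"},
)

# 3. Run the assistant on the thread.
run = requests.post(
    f"{BASE}/threads/{thread['id']}/runs",
    headers=HEADERS,
    json={"assistant_id": ASSISTANT_ID},
).json()

# 4. Poll the run until it finishes.
while run["status"] in ("queued", "in_progress"):
    time.sleep(1)
    run = requests.get(
        f"{BASE}/threads/{thread['id']}/runs/{run['id']}", headers=HEADERS
    ).json()

# 5. List the messages; the assistant's reply is now the newest message.
messages = requests.get(
    f"{BASE}/threads/{thread['id']}/messages", headers=HEADERS
).json()
print(messages["data"][0]["content"][0]["text"]["value"])
```

In Bubble terms, step 4 is typically a “do every X seconds” workflow and step 5 feeds the repeating group that displays the conversation.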

I’ve kept it very sharp and to the point.

Check it out.

Stay tuned as I’ll be covering the other two topics very soon.

Comment below if you’d like more information.

If you like this video, please show some :heart:

Thanks
Zubair


Here’s part two of the series.

Setting up Bubble frontend workflows

Check it out!

What would you like to see next?

Thanks
Zubair


Hi Zubair, thanks for creating these videos. Would you be open to sharing your editor or providing screenshots of the workflow & API calls?

I had initially set up the calls and workflow a slightly different way, but was unable to get the RG to refresh, even after a user submitted a message, before the assistant returned a response. If I manually refreshed the page, the RG content was correct and included the user’s message and the assistant’s response. I tried setting things up the same way you did and am still running into the same problems: the RG refresh isn’t happening, my ‘get run’ API call doesn’t seem to return a single run ID (it returns a list), and my notifications aren’t working as part of the ‘do every second’ workflow.
Thanks!
-ben
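A note on the ‘get run’ point above: the Assistants API has both a “list runs” call, which returns an array, and a “retrieve run” call, which takes the run ID in the path and returns a single run whose status can be polled. A hedged sketch, with a placeholder key and IDs:

```python
import requests

# Placeholder values for illustration only.
HEADERS = {
    "Authorization": "Bearer sk-...",
    "OpenAI-Beta": "assistants=v2",
}
BASE = "https://api.openai.com/v1"
thread_id = "thread_..."
run_id = "run_..."

# "List runs" returns an array of runs for the thread (newest first).
runs = requests.get(f"{BASE}/threads/{thread_id}/runs", headers=HEADERS).json()
latest_run = runs["data"][0]

# "Retrieve run" needs the run ID in the path and returns one run object.
run = requests.get(f"{BASE}/threads/{thread_id}/runs/{run_id}", headers=HEADERS).json()

# The polling workflow only needs the status field of that single run.
if run["status"] == "completed":
    print("Assistant has responded; refresh the message list feeding the RG.")
```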

Scratch that, I actually got it sorted last night. :ok_hand:
It’s actually surprising how quickly the responses come back using GPT-3.5 Turbo. I’d say the speed is good in one sense (users aren’t left hanging, wondering if the chat has hung), but it’s almost too fast and feels unnatural. I’ll likely add a pause and a Lottie animation to make it look like the assistant is typing, then let the response come through. Thanks again for these videos, much appreciated! :pray:


Glad you could make it work.

For other people, I’ve made the app publicly readable.

However, I had to revoke the API key, so the app won’t work.

But the editor view is visible. Not sure how useful that would be.

In an ideal world, there would be a way to show the editor of an app but without the API keys… But even then, OpenAI is expensive and the bill can start adding up.

Thanks
Zubair


Thanks! Definitely still helpful even without the API key. I think one of the things that tripped me up was the ‘list messages’ call. I initially had it as a data call rather than an action (and then used that as the data source for the RG), but that’s likely more due to my inexperience with API calls. Thanks again for sharing!

Hello,
Thank you for your advice.
I followed your instructions in the first video, but I don’t understand the steps in the second.
My problem is the following: after following your first tutorial, I am unable to generate an answer to a question related to my API assistant.
It shows me the descriptive text of my assistant instead of providing me with an answer.
What am I missing? How do I edit my action button so that it works?
Thanks a lot!

Hi, I have the same problem. Please tell me if you managed to solve it. I ran several tests and realized that when displaying ‘each item’s content: each item’s text value’, the output follows this pattern if you make two queries:

Query 1:
system text <request text 1>

Query 2:
system text <request text 1> <response to request 1> <request text 2>
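For anyone hitting the same display pattern: the List Messages call returns one object per message (newest first by default), and each message’s own text sits at content[0].text.value, so the repeating group can show one message per row instead of a concatenation of everything so far. A rough sketch of parsing that response, with placeholder values:

```python
import requests

# Placeholder credentials and IDs for illustration.
HEADERS = {"Authorization": "Bearer sk-...", "OpenAI-Beta": "assistants=v2"}
BASE = "https://api.openai.com/v1"
thread_id = "thread_..."

# List Messages returns the whole thread, newest message first by default.
resp = requests.get(f"{BASE}/threads/{thread_id}/messages", headers=HEADERS).json()

# One row per message: its role ("user" or "assistant") plus its own text value.
for msg in reversed(resp["data"]):  # reversed -> chronological order for display
    text = msg["content"][0]["text"]["value"]
    print(f"{msg['role']}: {text}")
```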

Thanks for the great tutorials!

I see they just introduced streaming to Assistants within the last month or so. Any idea if they’ve also enhanced it so we no longer need to keep polling?

Just noticed streaming has been released.

Great news.

However, I suspect this is trending into custom plugin/code territory.

My main worry is how Bubble can keep the API key secure while supporting streaming.

From a glance at other forum topics, that may still be a challenge, and this will need an intermediate server (a rough sketch of that idea follows below)…

I don’t think I can make a ‘quick’ tutorial or simply suggest something for streaming.

I’m sure custom plugins will appear soon. However, please keep the API key security aspect in mind.

Thanks
Zubair
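To illustrate the “intermediate server” point: the relay runs server-side, holds the API key, starts the run with the streaming option, and forwards the server-sent events to the browser, so the key never reaches the client. A rough sketch only, assuming the Assistants "stream": true option and the v2 beta header:

```python
import os
import requests

# Rough sketch of the "intermediate server" idea: this code runs server-side,
# so the API key never reaches the browser.
API_KEY = os.environ["OPENAI_API_KEY"]
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2",
}
BASE = "https://api.openai.com/v1"

def stream_run(thread_id: str, assistant_id: str):
    """Start a streaming run and yield the server-sent event lines as they arrive."""
    with requests.post(
        f"{BASE}/threads/{thread_id}/runs",
        headers=HEADERS,
        json={"assistant_id": assistant_id, "stream": True},
        stream=True,
    ) as resp:
        for line in resp.iter_lines():
            if line:
                # Each line is an SSE frame ("event: ..." / "data: {...}");
                # the relay would forward these on to the connected client.
                yield line.decode("utf-8")
```

A Bubble app would then talk to this relay instead of calling OpenAI directly from the page.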


Thanks! Streaming is a nice UX, but I was thinking more in terms of whether, with this update, we can do away with the constant polling that continuously checks for a response. So basically still use the original method Zubair demonstrated, but without the polling. Then we get the efficiency of not having to send large, snowballing payloads to OpenAI and also do away with the wasted WU it takes to keep polling OpenAI.

I have built this to provide a solution, keeping your keys secure.


Excited to see what you and others can/will do with the additional streaming functionality, Z!
-jpb

My friend, may God bless your life immensely!
People like you deserve all the success in the world; I wish you prosperity on your journey.
Thank you so much!!!

If anyone is still looking for a solution to this, I’ve built a much simpler way to create a full-fledged chatbot with the OpenAI Assistant that can be published to websites. Here’s a video on it:

I’ve taken another stab at it to simplify everything and break things down step-by-step.

Here’s what’s included:

  • Setting Up (API Connector Plugin, OpenAI Playground)
  • Creating the Assistant in Playground
  • Understanding Threads and Messages
  • Running and Managing API Calls in Bubble
  • Securing Your API and Preventing Prompt Leaks
  • Front End and Backend on Bubble
  • Streaming for Long Responses

Check it out here:
