ChatGPT: The AI Plugin You've Been Waiting For!

Hey Super Bubblers!

We are thrilled to announce the release of our newest free plugin: ChatGPT - AI Complete Toolkit! OpenAI released the ChatGPT API only days ago, and our team at The Upstarters has been working tirelessly since then to bring you this groundbreaking integration. We can't wait for you to try it out.

You must have heard of ChatGPT, right? It's one of the world's most powerful AI language models. Our plugin allows Bubble developers to integrate this state-of-the-art natural language processing into their Bubble apps. With ChatGPT, you can create chatbots, virtual assistants, language translators, and so much more.

Imagine being able to develop an app that can understand and respond to user inquiries in multiple languages, or an app that can assist with complex tasks like legal or medical advice, or an app that can analyze huge bodies of complex data. ChatGPT can help you achieve all of that and more!

We know that AI is the future of technology, and we’re excited to be on the forefront of this innovation with ChatGPT. This plugin is designed to make AI accessible and easy to integrate into your Bubble app, regardless of your level of experience with AI technology.

Try out the demo here: Bubble.io Plugin: ChatGPT - AI Complete Toolkit | The Upstarters

(you’ll need your own API key from OpenAI; link provided).

Install the plugin from here: ChatGPT - AI Complete Toolkit Plugin | Bubble

We can’t wait to see what you create with it, so do let us know!

Good luck,

Ali Farahat
CTO @ The Upstarters

10 Likes

I’ve been testing it out, and it’s so much fun and much faster than GPT-3! ChatGPT’s model is called ‘gpt-3.5-turbo’, and it’s a significant upgrade with more to come (which we will of course update this plugin for).

1 Like

Amazing and at the same time frightening technology. I wonder what civilization will come to.
But I’ll have to try your plugin.

2 Likes

Hi there

I was wondering whether ChatGPT can have multi-prompt conversations that take previous messages into account? So, for example, could I leave out the full context in a follow-up question and still have it understand and provide an answer?

Cheers
Max

2 Likes

Yes, that’s one of the differences from GPT-3. Try out our demo page from the link above with your API key: you can type a 2nd/3rd etc. message into the chat without repeating the previous context and get the result you want. It’s pretty cool!

The plugin/API sends a “user” prompt and gets an “assistant” response, and this conversation can go back and forth, getting longer and longer. OpenAI has designed this in a fairly straightforward way, and our plugin makes it pretty simple to control.
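To make the back-and-forth concrete, here is a minimal sketch in Python of how the conversation history is built up and resent on each call. The reply text is a placeholder, not a real completion, and no network call is made; only the message format matches the chat API.

```python
import json

# Each turn in a ChatGPT conversation is an object with a "role"
# ("system", "user", or "assistant") and the message "content".
# The full history is resent with every API call, which is how the
# model "remembers" earlier turns.
messages = [
    {"role": "user", "content": "Write a haiku about pickles."},
]

# After the API responds, append the reply so follow-ups have context.
# (The reply text here is a placeholder, not a real completion.)
messages.append({"role": "assistant", "content": "Crisp and briny bite..."})

# The follow-up can omit the context because the history carries it.
messages.append({"role": "user", "content": "Now one about olives."})

# This is the shape of the body the chat completions endpoint expects.
payload = json.dumps({"model": "gpt-3.5-turbo", "messages": messages})
```

The key point is that the "memory" lives entirely in the list you resend, not on OpenAI's servers.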

The token limit is 4096 for prompt and completion combined, for the time being. The limit is enforced by the model itself, which is a complicated way of saying ‘the model will cut off the completion once the combined total reaches 4096 tokens, even if ChatGPT wanted to keep writing’.

The ChatGPT 3.5 API is also 10x cheaper than GPT-3!

This is at the time of writing, and we will keep building & updating as things change.

2 Likes

Got it! I went ahead and tried to set it up, but I’m struggling with this workflow. Could you please explain how to build it?

Thanks!


Awesome!

1 Like

I don’t think this is correct.

What I saw was 4096 for prompt + completion combined.

Can you link to a source?

1 Like

Hey, yes sorry you are right on the token limit point. I was operating under some incorrect info there, apologies. I have updated my post above to reflect the correct info.

1 Like

Hey @enbitious

Please see the screenshot below

As you can see, the API call to ChatGPT picks up data from the repeating group state chatLT (this can be substituted with a database or anything else) and formats the data into a format that the API call accepts.

The messages field requires an array input (a list separated by commas). The list needs to be a list of objects, so for this we will need to use Bubble’s ‘format as text’.

The general format is

{"role" : "VALUE", "content" : "VALUE"},
{"role" : "VALUE", "content" : "VALUE"}

The plugin wraps the values above in square brackets: [VALUES]. So in essence the final value looks like this:

[
{"role" : "VALUE", "content" : VALUE_formatted as JSON-safe},
{"role" : "VALUE", "content" : VALUE_formatted as JSON-safe}
]

We use Bubble’s ‘formatted as JSON-safe’ to handle characters that need to be escaped. Also notice how we dropped the surrounding " double quotes from the ‘formatted as JSON-safe’ value, as Bubble adds those automatically.
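For readers more comfortable with code than the Bubble editor, here is the same formatting step as a small Python sketch. The `to_message_list` helper is hypothetical (not part of the plugin); `json.dumps` plays the same escaping role as Bubble's 'formatted as JSON-safe' operator.

```python
import json

def to_message_list(pairs):
    """Build the JSON array the messages field expects from (role, content)
    pairs. json.dumps escapes quotes and newlines, playing the same role
    as Bubble's 'formatted as JSON-safe' operator."""
    return json.dumps([{"role": role, "content": content} for role, content in pairs])

# Content with quotes and a newline, which must be escaped to stay valid JSON.
body = to_message_list([
    ("user", 'He said "hello"\nand left.'),
    ("assistant", "Hi there!"),
])
```

The resulting `body` string is a bracketed, comma-separated list of objects, exactly the shape described above.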

You can also check out the OpenAI documentation for more information: https://platform.openai.com/docs/guides/chat/introduction

1 Like

No worries. Glad to have helped.

How did you handle the inevitable outcome of very long chats?

When the chat content passes the token limit, the API call values will get cut off to fit within the limit, which will result in an error.
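One common mitigation is to trim the oldest turns before sending. As a rough language-agnostic sketch (the `trim_history` helper and the ~4-characters-per-token estimate are assumptions for illustration, not part of the plugin or the real tokenizer):

```python
def trim_history(messages, max_tokens=4096, reserve=500):
    """Drop the oldest turns until a rough size estimate fits the limit.

    Uses ~4 characters per token as a crude heuristic (an assumption;
    the real tokenizer counts differently). 'reserve' leaves headroom
    for the completion so the reply itself isn't cut off.
    """
    def estimate(msgs):
        # ~4 chars/token plus a few tokens of per-message overhead
        return sum(len(m["content"]) // 4 + 4 for m in msgs)

    trimmed = list(messages)
    while len(trimmed) > 1 and estimate(trimmed) > max_tokens - reserve:
        trimmed.pop(0)  # discard the oldest exchange first
    return trimmed

history = [
    {"role": "user", "content": "x" * 20000},  # a very long early message
    {"role": "user", "content": "short follow-up"},
]
trimmed = trim_history(history)
```

A more faithful count would use OpenAI's tokenizer, but even a crude cut-off like this avoids hard API errors on long chats.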

Great implementation of the chat functionality on this one guys.

Just wondering if there’s any guidance from OpenAI as to why the responses from the 3.5-turbo model are still far less robust than those on the public-facing ChatGPT page? For example, I tried a random prompt in your sandbox:

“Write an ad for a pickle company in Tarzana, California.”

I get the exact same answer every time I try that prompt in the sandbox - even after refreshing the page.

"Looking for a delicious and tangy snack to satisfy your cravings? Look no further than Tarzana Pickle Company! Our pickles are made with the freshest ingredients and are handcrafted to perfection.

Whether you prefer dill, spicy, or sweet pickles, we have a flavor for everyone. Our pickles are perfect for snacking, adding to sandwiches, or even as a garnish for your favorite dishes.

Plus, our company is based right here in Tarzana, California, so you can feel good about supporting a local business. Try Tarzana Pickle Company today and taste the difference!"

Trying the same prompt twice in the public-facing ChatGPT page gives:

  1. "Looking for a tasty and healthy snack that packs a punch of flavor? Look no further than our pickle company in Tarzana, California!

Our pickles are made from the freshest, highest-quality ingredients, and we use traditional recipes to create the perfect balance of tangy, salty, and savory flavors. Whether you prefer a classic dill pickle or something a little more adventurous, like our spicy pickled carrots or garlic-infused pickled mushrooms, we’ve got something for everyone.

Not only are our pickles delicious, but they’re also a great source of probiotics and other beneficial nutrients. Plus, they’re low in calories and fat, making them the perfect guilt-free snack to enjoy anytime.

So why settle for bland, boring snacks when you can have the best pickles in Tarzana? Try our products today and taste the difference for yourself!"

  2. "Looking for a delicious and authentic pickle experience? Look no further than Tarzana Pickle Co.! Our pickles are hand-crafted in small batches using only the freshest ingredients and time-honored recipes.

Located in the heart of Tarzana, California, our pickle company has been satisfying customers for years with our mouth-watering selection of pickles. From dill to bread and butter to spicy, we’ve got a pickle for every taste bud.

Our pickles are made with locally sourced cucumbers and a unique blend of spices, giving them a flavor that is both bold and refreshing. We use no preservatives or artificial colors, ensuring that each bite is as healthy as it is delicious.

Tarzana Pickle Co. is committed to quality and customer satisfaction. Our pickles are made with love and care, and we believe that it shows in every bite. So if you’re looking for a pickle that will knock your socks off, come visit us at Tarzana Pickle Co. and taste the difference for yourself!"

+++

There’s a pretty clear difference in the quality of the responses, even with the upgraded model. Just curious to see if they’ve given any reasons why that might be.

Also, any idea why trying the same prompt multiple times after screen refreshes gives the exact same response on the API call?

Did someone say, “Pickles?”

3 Likes

Are you using the API Connector? Bubble caches the API responses…

1 Like

Any way around this?

Nice work @AliFarahat
I love this!

Great, thanks for this! Also, just a quick question: what if I give it a very hard question and it doesn’t have enough time to process and generate a response before Bubble times out? How can I show an error message, like OpenAI does when a response cannot be generated, after Bubble has timed out? Thanks.


1 Like

Add a random value, like the current datetime.
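The idea is to make some field of the call dynamic so no two requests look identical to the cache. As a language-agnostic sketch (the `bust_cache` helper and `_nonce` name are hypothetical; in Bubble you would add a dynamic value such as Current date/time to the call instead):

```python
import uuid

def bust_cache(call_params):
    """Return a copy of the call parameters with a throwaway unique value
    added, so no two calls look identical to a response cache. '_nonce'
    is an arbitrary illustrative name; any per-request dynamic value
    (e.g. a timestamp) achieves the same thing."""
    params = dict(call_params)
    params["_nonce"] = uuid.uuid4().hex
    return params

a = bust_cache({"prompt": "Write an ad for a pickle company."})
b = bust_cache({"prompt": "Write an ad for a pickle company."})
```

Two otherwise-identical calls now differ in the nonce, so a cache keyed on the full request won't serve a stale reply.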

Not available in the demo, but something I may expose in a few days or so, is the ability to modify the temperature and the top_p value.

That will certainly vary the replies and stop them repeating. Although I’m not quite certain whether Bubble is doing any caching of this on their end! Maybe they do.

Thanks @AliFarahat - that’s super helpful. You guys have done a great job on this.

1 Like