[Updated Plugin] ChatGPT with Real-Time Streaming (No more timeouts!)

I think I mis-explained it a bit. I don’t want to keep the empty records; I want to keep what they initially type in and the response. I.e. the user types an input, the output is saved, and I’m able to see what it was. If they then delete that conversation, it turns into a blank record.

I’d like to KEEP what the conversation was BEFORE they delete it. Does that make sense? Also, deleting empty chats every now and then wouldn’t be a bad thing to do either.

Not to me but it’s late here :sweat_smile:

As I understand it:

  1. The user’s messages and the assistant’s responses are saved correctly
  2. When they delete the conversation, you don’t delete the Conversation thing, instead you clear the list of messages
  3. This last part - keeping the conversation - I’m lost there, I thought the user wanted it deleted!

Your user deletes a conversation, but why do you want to keep an extra copy of it somewhere else? What’s the purpose?

Mainly to always go back and see what people are asking so I can understand what they’re using the tool for.

Okay, so don’t delete the conversations. Instead, have a field ‘hidden’ on the Thing which is of type yes/no. When the user deletes the conversation, set ‘hidden’ to ‘yes’.

Set up your privacy rules for Conversation so it’s: Only when This Conversation's User is Current User and This Conversation's hidden is no. Then, the user will only be able to see the non-hidden conversations.

You can also modify the search in the RG that displays the list of conversations to use the ‘hidden = no’ constraint (in case you change the privacy rules around later).

Then, the app admin can still view all of the conversations (assuming you allow them to with privacy rules) even when they’re deleted.
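For anyone who wants to see the soft-delete pattern above spelled out, here’s a hypothetical sketch in plain code (Bubble itself is no-code; the `Conversation` class and field names below just mirror the Thing and the ‘hidden’ field discussed):

```python
# Illustrative soft-delete pattern: flag records as hidden instead of
# deleting them, and filter on that flag when displaying.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    user_id: str
    messages: list = field(default_factory=list)
    hidden: bool = False  # set to True instead of deleting the record

def delete_conversation(conv: Conversation) -> None:
    """'Delete' from the user's perspective: just flag it hidden."""
    conv.hidden = True

def visible_conversations(all_convs, current_user_id):
    """Mirror of the privacy rule: a user sees only their own,
    non-hidden conversations. An admin view would skip this filter."""
    return [c for c in all_convs
            if c.user_id == current_user_id and not c.hidden]
```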

Bear in mind there might be some general user privacy / best practice implications in this. As a user, when I click delete, I expect it to delete and be completely gone within like 30 days.


Hi all,

I’ve got some great news: I’ve just finished version 6 of the plugin :partying_face: It will be submitted to Bubble today for review, and should hopefully be available very soon.

It’s a complete rewrite of the plugin, which addresses known issues with the current version, and adds some powerful new features.

The plugin now includes utilities like extracting text from file uploads, web search, and Bubble-native vector storage and search (for Retrieval Augmented Generation).

I’ll be renaming the plugin ChatGPT/LLM Toolkit to better reflect these additional capabilities, and this will be a focus moving forward.

If you have any ideas for what should make it into the next version, please let me know! The next features on the roadmap are support for Claude 2 (a large-context LLM from Anthropic), and a code/markdown auto-formatting element.

Here’s the bit of the plugin description page that describes these new additions. A video tutorial for the new version will also be coming soon.

Stay tuned!


:fire: INTRODUCING: The NEWLY ENHANCED ChatGPT/LLM Toolkit Plugin! :fire:

Level up your ChatGPT experience with groundbreaking updates and powerful additions. We’ve not only rebuilt the plugin from the ground up but also packed it with premium features that cater to your every need.

:rocket: What’s New?

    🌐 Web Search Capabilities: Built-in web search enables you to extract relevant information from the web in real time.

    📂 File Uploads: Seamlessly extract text from uploaded files (.pdf, .pptx, .docx, .csv, and many more types). It’s easier than ever to add your own data sources to chats.

    📌 Vector Storage & Search: Super-charge your app with “Retrieval Augmented Generation”. Introducing Bubble-native vector storage and search, so that you can search through your data without requiring Pinecone, Weaviate, or another third-party vector database service.

    🧬 Embedding Generation: Convert text into embedding vectors effortlessly, paving the way for efficient future searches.

:arrows_counterclockwise: What’s Improved?

    📈 More Scalable Backend: We’ve optimized our system architecture to ensure a seamless experience, eliminating downtime.

    🔍 Enhanced Error Handling: With clearer and simpler mechanisms, we’ve made it easier to troubleshoot and navigate unexpected events.

    📌 Custom Headers: Tailor your requests better with the new option to add custom headers.

    ⚙ Comprehensive API Parameters: Fine-tune your requests using all available parameters, including frequency penalty, presence penalty, and logit bias.


Can you share more about how this works?

It uses the Bubble DB for storage, the OpenAI Embeddings API, and similarity search is done by the plugin. What are you wondering about? :face_with_monocle:

When a file is embedded, where exactly is that saved? In a specific data type with the embedding + the relevant content?

I suppose I’m confused how you can vector search a SQL database - if we have the embeddings stored on a data type with one Thing for each vector, are you just brute forcing by calculating the similarity of every result of the database search and finding the top N most similar?

Sorry if I’m missing something obvious!

Great updates by the way.

You’re mostly correct - text and vectors are stored in Bubble together (although this isn’t strictly necessary, but it’s probably the simplest pattern for most folks that want to use this feature).

For search there are some optimizations re: brute forcing top_n matches.

Also note that you can do Bubble-side filtering of your dataset, so instead of having namespaces or metadata filtering like Pinecone provides, you can just pre-filter your result sets based on some criteria (say a certain user’s data, or data with specific tags in your DB, or whatever); ie, you don’t need to run similarity search over every piece of data you have if you don’t want to.
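For anyone curious what brute-force top-N similarity search with a pre-filter looks like conceptually, here’s a minimal sketch (not the plugin’s actual code; the record structure and the `predicate` filter are assumptions standing in for a Bubble-side “Do a search for”):

```python
# Brute-force top-N cosine-similarity search over a pre-filtered dataset.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_n_matches(query_vec, records, n=3, predicate=lambda r: True):
    """records: dicts with 'text' and 'vector' keys.
    predicate: the pre-filter (e.g. only this user's data, or specific
    tags), so similarity isn't computed over the whole dataset."""
    candidates = [r for r in records if predicate(r)]
    scored = sorted(candidates,
                    key=lambda r: cosine_similarity(query_vec, r["vector"]),
                    reverse=True)
    return scored[:n]
```

The pre-filter is the key point from the post above: narrowing `records` first (by user, tag, etc.) means you only pay the similarity cost on the subset you actually care about.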

I’m looking forward to seeing how the algos perform with large datasets.

I’m also currently experimenting with providing a vector-native storage backend as part of the plugin, to make this more efficient, so if we hit bottlenecks too soon, that’ll be coming.


Hi! Thanks for the update! Seems cool :slight_smile: I wanted to ask: will it have a feature to extract data by adding links to websites? For example, let’s say I want to rewrite articles from some websites.

Also, another Bubble plugin has a feature that supports OpenRouter.ai.

Yep, I figured it’d be something along the lines of a ‘Do a search for’ over all the chunks you want queried. It does make filtering easier.

Just concerned about the performance of client-side calculations (particularly on mobile, for example). Chunks might be 150 words; sometimes I’ll get people uploading 1,000-page PDFs with 500,000 words = ~3,300 chunks, and then they’ll have other documents in the same category, so it’s easy to need to query 10,000+ chunks of text. Testing will show whether this works though :slight_smile:
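As a quick sanity check, the chunk arithmetic in that scenario works out as follows (word counts are the poster’s hypothetical figures):

```python
# Rough chunk-count estimate for a large uploaded document.
words = 500_000     # e.g. a 1,000-page PDF
chunk_size = 150    # words per chunk
num_chunks = words // chunk_size  # ~3,300 chunks for one document
```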

If it’s efficient and performant, I’d consider keeping things on the Bubble app rather than Pinecone, but I also like to avoid relying on plugins more than necessary, just from the standpoint of future-proofing the app.

I experimented with the search client-side, but opted for server-side for the reasons you’re describing, so that shouldn’t be an issue :crossed_fingers:

Oh awesome. Any WU implications?

Also, is server-side on the Bubble side or on a Lambda server somewhere (i.e. where is the data being sent/processed)?

The vectors are processed by the plugin servers, but the text doesn’t need to leave Bubble.


Hey @launchable,

When will the version 6 update be available?


@gabriel.guilhem - as early as tomorrow, hopefully! Made some last minute additions (like getting text from specific URLs and infrastructure improvements) that set it back a few days. Hoping to submit to Bubble tonight though, and hopefully it’ll be approved quickly :crossed_fingers:


Is message 2 (content) working? I’m testing it and it doesn’t recognize the message.
I.e. I’m sending “My name is John” and then asking “what’s my name”, without results.

Does anybody know if it is possible to run this plugin as a backend workflow? I want to schedule it to run a recurring function call once a day. For example, I want to summarize the trending financial news from a function call to Seeking Alpha once a day.

You haven’t set a role, have you tried that?
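For context on the role question: the OpenAI Chat Completions API expects every message to carry a role, and earlier turns have to be re-sent with each request for the model to “remember” them. A minimal sketch of a well-formed history (content strings borrowed from the question above):

```python
# A chat history in the shape the Chat Completions API expects.
# Each message needs a "role" ("system", "user", or "assistant");
# prior turns must be included for the model to recall "John".
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My name is John."},
    {"role": "assistant", "content": "Nice to meet you, John!"},
    {"role": "user", "content": "What's my name?"},
]
```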


Hero! Thanks