New LLM streaming plugin!

Hi Everyone,
I have just pushed a new version that allows you to stream with Flowise. Flowise is an unbelievable tool that I have started using a ton. If you don’t know it, there is an awesome series of tutorials here:
Flowise AI (2024) Tutorial - YouTube
I strongly recommend you check it out.
If you build a Flowise chatflow and it can stream (certain flows can’t stream based on how you have constructed them), then the plugin can stream it in. If it can’t stream, then the plugin will return the response all at once.

As of right now, you have to host Flowise yourself; the creator of the tutorials linked above shows you how to do it for free:
How To Access Flowise From ANYWHERE - Flowise Tutorial #9 (youtube.com)

There is a feature request for Flowise to be multi-user. Once they release that feature, you will be able to access Flowise through the plugin without needing to host it yourself. Unfortunately, there is no information on when they will release this feature, but as soon as they do, I will update the plugin.

I just realized that when I specify a major release, Bubble takes longer to approve it. You might not see version 5.0.0 available for 24 hours or so; I have no idea how long Bubble takes to allow this through.

Hi Paul, just curious if you have any hypotheses on why the elements behave that way.

Sorry but I haven’t gotten to it yet. I will try to take a look tonight.


So the issues mentioned, I've been able to address for the most part.

Just one issue at the moment:
i. When a message is created, the user prompt doesn't get added to the list of messages unless the page is refreshed. The same is also true for the response, although the response is retrievable immediately using StreamElements A's response.

Ideally, I would like the user prompt to show in the thread's list of messages immediately after creation, and the response to show in the list of messages once the streaming response is complete.

You will need to use the action "Display list in a repeating group", and the list to display is the result of the previous step, which makes another API call to get the list of messages in a thread. The reason it works on refresh is that Bubble makes a new API call to OpenAI and updates the list in your RG, but generating a new message doesn't automatically trigger a new call to the list messages in a thread endpoint. You have to make that happen yourself.

Hey Paul, I’ve tried this and at least the way i tried did not work. I’ve tried:
i. Retrieving thread as a result of create message
ii. I’ve tried retrieving message as a result create message
ii. i’ve tried modifying thread as a result of create message and then display list messages data data as instructed.
ii. ive tried simply displaying list after create message, with list message data data as a data source

None of it has worked. The streaming components work perfectly, but when it comes to updating the list of messages in real time, nothing I have tried has worked thus far. If you have a working workflow that includes the mentioned action components, I'd love to take a look in read-only mode.

Hi Paul, do you think you could add a feature within the plugin to 'get' data from the relevant API when a certain condition is true? I think this is particularly important for AI assistants that require refreshing the API call every time a conversation is 'updated.'

This video is not amazing, but it demonstrates the problem. I could try and simply do this myself, but I do not think I can insert my API key at more than one endpoint.

Hi @betteredbritain
This is a result of the way Bubble caches API calls. Given nothing has changed in the call (i.e. thread_id is still the same and is the only dynamic expression), Bubble tries to make your platform operate more efficiently by not making redundant calls. In this case, even though the call parameters are the same, you know the response is different, so we have to include a way to force the call parameters to change. By the way, this is not specific to OpenAI calls; it is true for all Bubble API Connector calls.

I have just released a new version, 5.0.1, which includes the ability to set a Cache-Control header on the two list messages API calls (data and action):

If you set the value to "current date/time: extract UNIX", then every time you make this call the current date/time will have a new value, the call parameters will therefore be different, and Bubble will overwrite the previously cached results. This should solve your problem.
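
For anyone curious what this looks like outside of Bubble, here is a minimal sketch of the same idea, assuming the standard OpenAI Assistants list-messages endpoint; the thread id and API key below are placeholders:

```ts
// Minimal sketch of the cache-busting idea: the request is identical on
// every call except for one header whose value (the current Unix time)
// changes each time, so "same parameters => reuse the cached result"
// logic never matches a previous call.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? ""; // placeholder
const THREAD_ID = "thread_abc123";                       // placeholder

async function listMessages(): Promise<unknown> {
  const res = await fetch(`https://api.openai.com/v1/threads/${THREAD_ID}/messages`, {
    method: "GET",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "OpenAI-Beta": "assistants=v2",
      // Plays the same role as "current date/time: extract UNIX" in the
      // plugin field: a value that is different on every call.
      "Cache-Control": String(Math.floor(Date.now() / 1000)),
    },
  });
  return res.json();
}
```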

This works perfectly!


Good to hear

Hi Everyone,
There will be 1 hour of server downtime at 11:30 PM EST on August 5th (~25 hours from now). I need to upgrade the server with more storage space.
Sorry for the inconvenience.
Thanks,
Paul

Hi @paul29 … I am actually experiencing this problem as we speak. Here is some context:

When a file uploader's value is changed, I save the file to the database along with its fields: the associated User, the current page's thread, as well as the file itself. As a result of the "Create a new thing" action, I set a state on the page to be the unique identifier of the file saved in the initial step. I then use the value of that custom state (the file's unique ID) as the file id when I "create a message with files". I still get the attached error despite the id being saved and sent to the API correctly (I believe).


You need to format the file_ids parameter as:
["<file_id_1>", "<file_id_2>", …] (Do not include the angle brackets; they are just standard notation.)
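
For example, with a single (made-up) file id the value would be ["file-abc123"], and with two files it would be ["file-abc123", "file-def456"].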

I have added this documentation to the parameter. It will be there on the next release.

Hi everyone,
I just released a new version 5.1.0. This has the following features:

  1. Llama 3.1 models (as of right now, the 405b model is available to paying users only, so unless you are a paying Groq subscriber, that model won't work)
  2. Ability to extract JSON from the LLM response and parse it as a Bubble object. I will be making a tutorial video on this next week, so stay tuned (a rough sketch of the idea follows after this list).
  3. Ability to make a client-side call (action called "Call LLM without protection"). This means you don't have to use the "Generate tokens" action to call the LLM, but it comes with the downside that your LLM token will be exposed in the network traffic, so use at your own risk (Note: Claude does not support client-side calls, which is why it is not included in the LLM provider dropdown).
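
To give a rough idea of what the JSON extraction does conceptually, here is a minimal sketch (not the plugin's actual code) of pulling a JSON object out of a model's free-form reply:

```ts
// Find the outermost {...} span in the model's text and parse it.
// This also handles replies where the JSON is wrapped in code fences
// or surrounded by extra prose.
function extractJson(llmText: string): Record<string, unknown> | null {
  const start = llmText.indexOf("{");
  const end = llmText.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(llmText.slice(start, end + 1));
  } catch {
    return null; // not valid JSON; the caller decides how to handle it
  }
}

// extractJson('Sure! {"name":"Ada","age":36}') -> { name: "Ada", age: 36 },
// which could then be mapped onto a Bubble object's fields.
```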

The next release will include the ability to parse video with Gemini (this is the only LLM provider that has this capability).

Let me know if you have any questions

This is the format I use for a singular file_id; I'd assume it to be correct.


However, I get the following error. Am I doing anything wrong?

Thanks for the documentation

So I looked into the issue and it is an Assistants v1 vs Assistants v2 issue. The API documentation for how to send files in v1 is different than for v2 (I'm sure OpenAI had a reason for this, but it's still annoying when companies do this). I will make the update later today or tomorrow. If you're in a rush, you can just create your own endpoint in the API Connector by following the documentation:
API Reference - OpenAI API
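
If it helps in the meantime, here is a rough sketch of what the v2 request looks like, based on the API reference linked above (the thread id and file id below are placeholders; double-check the field names against the docs). In v1 you passed a top-level file_ids array, whereas in v2 each file goes into an attachments array together with the tool that should use it:

```ts
// Sketch of the Assistants v2 "create message with a file" call.
// v1 used a top-level "file_ids" array; v2 attaches files via "attachments".
const OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? ""; // placeholder
const THREAD_ID = "thread_abc123";                       // placeholder
const FILE_ID = "file-abc123"; // placeholder: the id returned when the file was uploaded

async function createMessageWithFile(): Promise<unknown> {
  const res = await fetch(`https://api.openai.com/v1/threads/${THREAD_ID}/messages`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
      "OpenAI-Beta": "assistants=v2", // v2 header; v1 calls used assistants=v1
    },
    body: JSON.stringify({
      role: "user",
      content: "Please summarise the attached file.",
      attachments: [{ file_id: FILE_ID, tools: [{ type: "file_search" }] }],
    }),
  });
  return res.json();
}
```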

Not too much of a rush … thanks for looking into it. I'll just let you do your thing.


I have just added the action in version 5.1.2. There will now be an additional action called "Assistants (V2) - Create message with files".
The release also includes the ability to upload video to Gemini. This is how you set it up.

The only issue (which is kind of a big one) is the 30-second limit Bubble puts on server-side actions. This upload action is slow, and it doesn't take a long video for it to time out. I will be implementing a more robust solution in the coming weeks.


Hey Paul, when the "LLM upload file" action is used, we don't have a field for file_id. While this does not impact the "Call LLM" action, I keep getting null parameters for my file_id when trying to create a message with files. My hypothesis is that this is because we do not send this piece of information to OpenAI when uploading a non-video file in this specific use case. I am probably wrong, but I'm just asking whether perhaps there should be a field for file_id when uploading.

If you have a look at my last couple of screenshots, you will notice that I do not have trouble retrieving the unique id of the file, as I save it as a custom state on the page. However, when I attempt to use this piece of information to create a message with files, OpenAI doesn't seem to recognize it.