New LLM streaming plugin!

I also think we may be missing a vector_store_id field to store all the files uploaded by the user in the current thread.

There isn't a file_id field on the upload files endpoint:
[screenshot]
Those are the only two parameters, so I'm not sure what your issue is. Which last couple of screenshots are you talking about? Can you please be more specific?

I haven't added the vector store collection yet. I'll do that within the next few days. On the regular file upload, there is no vector store parameter. You are looking at the upload file endpoint underneath the vector store collection.


Hey @paul29 - admittedly I'm somewhere between beginner and novice, so this probably should be more obvious to me, but it's not, even after reading the documentation and watching videos. Can you give me the simplest actions, or the simplest pseudo-actions, for GPT assistants? All I really want to do is to

  1. Send a prompt from the UI
  2. Process it through my assistant (create thread, message, run, list messages)
  3. Display it in an HTML editor I use
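For reference, the three steps can be sketched as plain API calls. This is a hypothetical stdlib-only sketch against the OpenAI Assistants v2 REST endpoints (the plugin's actions wrap the same calls); `assistant_id` and the API key are placeholders you would supply.

```python
# Hypothetical sketch: create thread -> add message -> run -> list messages,
# using the OpenAI Assistants v2 REST API directly (no SDK required).
import json
import time
import urllib.request

API = "https://api.openai.com/v1"

def _call(method, path, api_key, body=None):
    """Send one JSON request to the Assistants API."""
    req = urllib.request.Request(
        API + path,
        data=json.dumps(body).encode() if body is not None else None,
        method=method,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "OpenAI-Beta": "assistants=v2",  # required for Assistants endpoints
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def ask_assistant(api_key, assistant_id, prompt):
    """Steps 1-3 from the post: thread, message, run, then read messages."""
    thread = _call("POST", "/threads", api_key)
    _call("POST", f"/threads/{thread['id']}/messages", api_key,
          {"role": "user", "content": prompt})
    run = _call("POST", f"/threads/{thread['id']}/runs", api_key,
                {"assistant_id": assistant_id})
    # Listing messages before the run finishes returns only the user prompt,
    # so poll until the run leaves the queued/in_progress states.
    while run["status"] in ("queued", "in_progress"):
        time.sleep(1)
        run = _call("GET", f"/threads/{thread['id']}/runs/{run['id']}", api_key)
    msgs = _call("GET", f"/threads/{thread['id']}/messages", api_key)
    return latest_assistant_text(msgs)

def latest_assistant_text(messages_json):
    """messages.list returns newest first; pick the first assistant reply."""
    for msg in messages_json["data"]:
        if msg["role"] == "assistant":
            for part in msg["content"]:
                if part["type"] == "text":
                    return part["text"]["value"]
    return None
```

Getting only the user prompt back (as described below) is usually a symptom of listing messages before the run has completed, which is what the polling loop guards against.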

How would you do this using the LLM Streaming plugin's actions? I have tried to create the thread and then the message via the OpenAI API, and then use LLM Streaming to create the run and list messages, but I'm only getting the user prompt back. I've seen some screenshots where people do this all within the plugin, starting with the generate token action; is that recommended?

At the end of the day, I'm just looking for the simplest explanation of the happy path to get me on the right track. Thanks!

PS - if this is in the documentation and I should have seen it / understood it, just point me to it and I'll study harder.

Hi @dballiet
No problem. Happy to help.
This is the simplest way to get streaming with gpt assistants using the plugin:



[screenshot]

The exposed state from the plugin called "streamed response" will contain the streamed response. Just populate a text box element with this expression; when you test, your assistant's response will stream in.
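For anyone curious what that exposed state is doing under the hood, here is a hedged sketch: assuming the Assistants v2 server-sent-events stream (a run created with `"stream": true`), each `thread.message.delta` event carries a fragment of the assistant's text, and the "streamed response" value is essentially the running concatenation of those fragments.

```python
# Sketch: fold Assistants v2 SSE "thread.message.delta" events into one
# growing response string (the equivalent of the plugin's exposed state).
import json

def accumulate_deltas(sse_lines):
    """Extract text deltas from raw SSE lines and concatenate them."""
    streamed = ""
    current_event = None
    for line in sse_lines:
        if line.startswith("event: "):
            current_event = line[len("event: "):].strip()
        elif line.startswith("data: ") and current_event == "thread.message.delta":
            payload = json.loads(line[len("data: "):])
            for part in payload["delta"]["content"]:
                if part["type"] == "text":
                    streamed += part["text"]["value"]
    return streamed
```

So re-rendering a text box from this state on every delta is what produces the typewriter effect.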

The tall picture got scaled down, which made it all blurry. Here are the important parts:
[screenshot]
[screenshot]

Please keep us updated on this :slight_smile: .

Hi @betteredbritain I have just pushed an update (5.20) which includes the vector store endpoints. Bubble varies in how long they take to approve a new update; it will be live sometime in the next 24 hours.

Thank you, Paul, for helping me with the plugin. You're fantastic!

No problem. I need your Flowise URL. You can direct message me if you want to keep it private.


Yes, I've sent it to you.

Hi @chatmdapp I have implemented the plugin for you to stream from Flowise. If you ask a question, it will show the streaming value in a textbox placed randomly in the middle of the page. You will need to work this into your UI so it works with your repeating group. I can see some issues with your setup. A friend of mine has a template coming out that will show you the best way to set up an AI chat widget. Please let me know if you have any more questions.


I truly appreciate your work. If anyone has doubts, I highly recommend purchasing the LLM Streaming plugin; you won't regret it. Thank you again, Paul!


Hey @paul29 Thanks a lot!


Hi @paul29, I was curious if you could push out a tutorial on creating messages with files, storing them in vector stores, and so on. I think just one example would suffice.

I'm having trouble doing something relatively simple. In an existing thread, I want to create a message with files. Previously I thought I could simply use the create message with files action, but after going through some forum pages on OpenAI, it's apparent that you need to upload the file first, store it in a vector store, and then query it. Here is the issue: the purpose on the upload file element action we have is "assistants", which is problematic for me, because when a user uploads a file, I don't want them adding to the knowledge base of the assistant, just to the relevant vector store.

Is my thinking off maybe?

Again, one example of a message created with files and stored in a vector store, with all the relevant steps, would be great if you have the time at the moment.
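For context, a hedged sketch of the per-message route being asked about: in Assistants v2, a file uploaded with `purpose="assistants"` can be attached to a single message via the `attachments` field, which scopes it to the thread's own file_search store rather than the assistant's knowledge base. The helper below just builds the request body; the endpoint path is the standard create-message call.

```python
# Sketch: request body for POST /v1/threads/{thread_id}/messages that
# attaches an already-uploaded file to this one message for file_search.
def build_message_with_file(prompt, file_id):
    """Attach file_id to a single user message (thread-scoped, not
    assistant-scoped), per the Assistants v2 attachments field."""
    return {
        "role": "user",
        "content": prompt,
        "attachments": [
            {"file_id": file_id, "tools": [{"type": "file_search"}]}
        ],
    }
```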

Can you point me to the forum thread you found? I was under the impression you could just use "create message with files" as well.

I understand that if you want to add files to the assistant, then you need to store the files in a vector store first.

Just want to make sure weā€™re on the same page here.

I believe this is the thread: Getting Attachments to work - API - OpenAI Developer Forum

It also mentions something about attaching the vector store to a particular thread, which makes sense to me on paper. I believe we don't have that thread field under the 'create a vector store' action as of yet.
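On that point, a small hedged sketch: in the Assistants v2 API the attachment goes the other way around. The vector store is not told about the thread; instead the thread carries a `tool_resources` field naming the vector store, which is why there is no thread field on the create-vector-store call.

```python
# Sketch: request body for POST /v1/threads that attaches an existing
# vector store to this particular thread via tool_resources.
def build_thread_with_vector_store(vector_store_id):
    """The thread references the vector store, not vice versa."""
    return {
        "tool_resources": {
            "file_search": {"vector_store_ids": [vector_store_id]}
        }
    }
```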

Just read through the thread. I believe this is your problem right here. (I'm assuming your assistant has files attached to it, which is one vector store, and then when you create a message with a file, it is trying to create a separate vector store, which according to the quote below is not possible. Correct me if that's not what's happening on your side.)

The yellow is the problem, the red is the solution (assuming my assumption above is correct). You will need to set up your RAG through a different provider and then pass that as a tool to GPT assistants. I haven't done much work with tool use, so I won't be able to recommend something off the top of my head. You'll need to do some research, but there are plenty of tutorials on YouTube on how to set up tool use with assistants.
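As a starting point for that research, here is a hypothetical sketch of what "pass that as a tool" looks like: you give the assistant a function tool definition (the `query_docs` name and parameters here are made up), the assistant emits a call to it during a run, your code runs the query against whichever external RAG provider you chose, and you submit the result back to the run.

```python
# Hypothetical function-tool definition for an assistant's "tools" list,
# delegating retrieval to an external RAG provider of your choice.
def build_rag_tool():
    """query_docs is a placeholder name; the schema follows the
    Assistants/Chat function-tool format."""
    return {
        "type": "function",
        "function": {
            "name": "query_docs",  # hypothetical function name
            "description": "Search the user's uploaded documents.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
```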


I see, this definitely sounds like a problem. Iā€™ll see what I can do.
