So you’re all good?
If so, might be a weird bubble bug that took some time to sync. Not sure.
Let me know if you have any more questions.
Yes, everything’s fine now. It was probably a temporary Bubble error.
Hi @paul29,
I selected GPT-4o in the Assistant and in “Generate Tokens”, but the streamed response is empty.
When I change “Generate Tokens” to GPT-4-1106-preview and keep the Assistant on GPT-4o, it works fine.
I’ll look into this in a bit
I just looked into this and gpt-4o requires an upgrade to v2:
Migrating from v1 to v2 - OpenAI API
This will take a couple of days to update and should be done by the end of the weekend. I will respond back as soon as it’s complete.
This is specific to assistants only. Regular streaming or server-side calls will work with gpt-4o.
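For anyone making their own server-side calls in the meantime, the wire-level difference in the v2 migration is the `OpenAI-Beta` request header (`assistants=v1` vs `assistants=v2`). A minimal sketch of building such a request, with a placeholder API key and endpoint path:

```python
import urllib.request

def assistants_request(path, api_key, version="v2"):
    """Build (but do not send) an Assistants API request with the beta header set."""
    return urllib.request.Request(
        "https://api.openai.com/v1" + path,
        headers={
            "Authorization": f"Bearer {api_key}",
            # v2 is required for gpt-4o with assistants
            "OpenAI-Beta": f"assistants={version}",
            "Content-Type": "application/json",
        },
    )

req = assistants_request("/assistants", "sk-placeholder")
print(req.get_header("Openai-beta"))  # prints "assistants=v2"
```

(urllib normalizes header names, hence the `Openai-beta` capitalization in the lookup.)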
I have pushed a fix. You do not need to update the plugin. This is a backend fix. gpt-4o now works with assistants. @ruimluis7
Hi Paul,
Great tool. I am going to subscribe. I read in your documentation that you are planning to include CrewAI. Any date on which this will be available?
Best,
benoit.
That’s great to hear @benoit.schiepers .
I am aiming for the end of next weekend. Implementing this feature is a little harder than I had anticipated, so it’s taking a bit more time than I had hoped. I will keep you updated on the progress.
@benoit.schiepers I’m a bit ahead of schedule. Should be done by Tuesday.
Thanks, I just subscribed. Has it been updated yet? I don’t see any CrewAI actions. Also, it would be great to get instruction videos for that specific feature.
Best,
Benoit.
Hi @benoit.schiepers
Thanks for subscribing. I am working on one final bug that’s preventing me from publishing. I should have it done in the next few hours. I will post back here as soon as it’s pushed.
As for the video, yes, I will definitely get one made by the end of the week and posted to my instructions page. For now, there are some more basic instructions posted there. Here’s the link again for your reference (make sure you click on the “CrewAI Instructions” button):
LLM connector demo (bubbleapps.io)
I will include buttons with the actions attached so you can see what to do in what order, but I can’t do that until the plugin gets updated, as that app is not my test app.
@benoit.schiepers I have just pushed the new version live (v4.2.1) which includes CrewAI functionality. You can follow the instructions on the preview page.
Please let me know if you have any questions with the setup and I’ll be happy to assist.
Please keep in mind that this is the initial release and error handling for this feature has not been fully implemented yet, as I wanted to get the feature released. I will implement more in-depth error handling to make it easier to determine what issues exist in an implementation.
Hi there …
I have a bit of an issue. Particularly with ‘calling the LLM.’
This is the desired behaviour for the relevant page:
i.) At the top of the page, I will have a button that shows a popup containing a list of assistants the user has not interacted with previously (i.e. has no thread associated with the user), in the format of a repeating group.
ii.) The user can click one, creating a new thread associated with the current user and the current assistant. Once an assistant has a thread associated with the current user, it will appear in an RG on the left side of the page.
iii.) The user can then select the desired assistant (with an existing thread) in the RG, retrieve the thread for that assistant and that user, and start interacting with it.
iv.) When the user clicks an assistant in the RG, we need to display the thread history in another repeating group on the right of the page, which is updated each time the user creates a prompt and the relevant assistant replies through the stream element.
Query - Where should I place the LLM stream element on the page/subgroup that contains the thread history?
Hi @betteredbritain
Sorry for the slow response.
Yes, a link to your editor would be great. Feel free to dm me.
But to try and answer your questions:
Where should I place the LLM stream element on the page/subgroup that contains the thread history?
The stream element itself doesn’t contain any information about the thread history. The stream element is just there to give you access to the “call LLM” action.
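For anyone curious what the stream element is doing with the tokens it receives, here is a hedged sketch (not the plugin’s actual code) of how a client typically assembles OpenAI-style `data: {json}` server-sent events into a full response; the sample lines are invented:

```python
import json

def assemble_stream(sse_lines):
    """Concatenate token deltas from chat-completions-style SSE lines."""
    out = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments / blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        out.append(delta.get("content", ""))  # role-only chunks have no content
    return "".join(out)

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
assemble_stream(sample)  # -> "Hello"
```

The point is that the element only accumulates tokens as they arrive; persisting and listing the history is a separate call.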
In order to get the thread history, you need to use this action:
[screenshot of the action settings]
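As a rough illustration of the shape that call returns (the Assistants API’s list-messages response), here is a sketch of pulling the plain text out of the payload; the sample data is invented:

```python
def message_texts(payload):
    """Flatten a /threads/{id}/messages-style response into plain strings."""
    return [
        part["text"]["value"]
        for msg in payload["data"]
        for part in msg["content"]
        if part["type"] == "text"  # messages can also carry image parts
    ]

sample = {"data": [
    {"role": "assistant", "content": [{"type": "text", "text": {"value": "Hi!"}}]},
    {"role": "user", "content": [{"type": "text", "text": {"value": "Hello"}}]},
]}
message_texts(sample)  # -> ["Hi!", "Hello"]
```

In Bubble you would point the RG’s data source at the plugin’s data call rather than parsing this yourself; the sketch just shows where the text lives in the response.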
Should I use the ‘pass through fields’, and if so, how? (The tutorials don’t really cover this.)
What should the type of content be on the page groups?
Not sure how to retrieve the ID attribute for the RG element that displays the thread history.
Hi Paul. Thank you for this great plugin. I am facing a strange error, only with gemini-pro; all the other LLMs are working well. I have no error message to provide you apart from the message below:
Thank you for the detailed reply Paul … I sent you a dm
Hi @adrien.charles75, glad you like it.
Are you able to send me a screenshot of your action settings? If you’re doing a streaming call, I will need to see the Generate Tokens action and the Stream action.
Hi @betteredbritain
I took a look and refactored your workflow to make it much more efficient and easy to follow. You will see the changes I made.
The plugin was not the issue. I have set up a temporary text field at the top which displays the output of your assistant’s response.
You have a repeating group with no data source:

Bubble will not show any content in an RG if there is no data source. You need to set this data source to the data API call that comes with the plugin to list the messages, as I mentioned above.
Just wanted to see if you got everything working @betteredbritain