New LLM streaming plugin!

So you’re all good?

If so, it might have been a weird Bubble bug that took some time to sync. Not sure.

Let me know if you have any more questions.

Yes, everything’s fine now. It was probably a temporary Bubble error.


Hi @paul29 ,

I selected GPT-4o in the Assistant and in “Generate Tokens”, but the streamed response is empty.
When I change “Generate Tokens” to GPT-4-1106-preview and keep the Assistant on GPT-4o, it works fine.

I’ll look into this in a bit

I just looked into this and gpt-4o requires an upgrade to the v2 Assistants API:
Migrating from v1 to v2 - OpenAI API

This will take a couple of days to update and should be done by the end of the weekend. I will respond back as soon as it’s complete.

This is specific to Assistants only. Regular streaming and server-side calls already work with gpt-4o.
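For context, the wire-level difference between the v1 and v2 Assistants API is the `OpenAI-Beta` request header: v2 requires `OpenAI-Beta: assistants=v2`, which is what gpt-4o assistants need. A minimal sketch of building the headers (assuming raw HTTP calls; the helper name is illustrative, and recent versions of the official `openai` SDK set this header for you):

```python
def assistants_headers(api_key: str, version: str = "v2") -> dict:
    """Headers for an Assistants API request; v2 is required for gpt-4o.

    Note: `assistants_headers` is a hypothetical helper for illustration,
    not part of the plugin or the OpenAI SDK.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        # The v1 -> v2 migration changes this beta header value:
        "OpenAI-Beta": f"assistants={version}",
    }

headers = assistants_headers("sk-...")
print(headers["OpenAI-Beta"])
```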

I have pushed a fix. You do not need to update the plugin. This is a backend fix. gpt-4o now works with assistants. @ruimluis7

Hi Paul,
Great tool. I’m going to subscribe. I read in your documentation that you are planning to include crewAI. Any date on which this will be available?

That’s great to hear @benoit.schiepers .

I am aiming for the end of next weekend. Implementing this feature is a little harder than I had anticipated, so it is taking a bit more time than I had hoped. I will keep you updated on the progress.