Has anyone successfully set up real-time streaming for OpenAI’s GPT APIs? Not to be confused with implementing the API call itself — specifically, I am asking how to set up the real-time streaming functionality.
Nah in all seriousness, you got it running in bubble by creating a plugin right?
If you’re seriously stuck on implementing the OpenAI APIs (which are as simple as they come), then you need to go back to the Bubble tuts about Spotify. Same difference.
I’ve got the API working. Very easy. I assumed the streaming function was not possible through the API connector.
My problem, which is different from OP’s, is that my big 2,000-word request causes the response to take too long and seems to time out Bubble’s API connector…
But I guess where this connects with OP is that if I had streaming working, my problem would dissolve.
@Keith Can you demonstrate and expand on how you got it to work?
I believe there was some confusion regarding what streaming the OpenAI GPT call meant. So I wanted to post this video to demonstrate it. If anyone has been able to accomplish this in Bubble, any direction would be greatly appreciated.
Seconding @keith’s comments, as calling OpenAI’s endpoints is very straightforward.
The OpenAI API endpoints have a limit of 4,096 tokens. Think of this as a telescope through which to see a big universe: the 4,096 tokens are the window that lets us access the hundreds of thousands of tokens that their LLMs (large language models) currently handle.
The architecture for adding memory to an AI app that is built conventionally or in any other way … i.e. … one of them being a Bubble app … requires the use of embeddings. These are vector embeddings (numbers that represent semantic meaning) that can be handled by conventional databases, but for best performance via vector databases such as Pinecone (which house the vectors representing the tokenized embeddings along with their text metadata).
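The idea above can be sketched in plain Python. This is a minimal, hedged example, assuming the classic OpenAI embeddings endpoint and the `text-embedding-ada-002` model; the in-memory list and `retrieve` helper are illustrative stand-ins for a vector database like Pinecone, not its actual API.

```python
import json
import math
import urllib.request

def embed(text, api_key, model="text-embedding-ada-002"):
    """Call OpenAI's embeddings endpoint; returns a list of floats.
    (Requires a valid API key; model name is an assumption.)"""
    req = urllib.request.Request(
        "https://api.openai.com/v1/embeddings",
        data=json.dumps({"model": model, "input": text}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"][0]["embedding"]

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, store, top_k=3):
    """Return the top_k most similar texts from a list of
    (text, vector) pairs. A real app would delegate this step
    to a vector DB such as Pinecone instead of scanning a list."""
    ranked = sorted(
        store,
        key=lambda entry: cosine_similarity(query_vec, entry[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]
```

The retrieved texts would then be prepended to the prompt so the model "remembers" relevant past content while staying inside the 4,096-token window.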
Here are a few resources that may prove useful:
In all, what I meant to suggest is that building expanded functionality into an AI app requires an architecture that includes most of the above components in order to function adequately.
Hi @cmarchan, can you tell me which part of your response speaks specifically to how to implement the streaming functionality in Bubble?
If anyone is able to demonstrate the streaming functionality in Bubble, reach out to me. I will pay you for your time in consulting on how to implement it.
Hello @Stackapp!
I have not yet dealt with it. Just wanted to contribute with further information.
I did not mean any ill.
Has anyone been able to implement the streaming functionality in Bubble?
Yes. Everyone. Cue FOMO.
@keith Please list some links to Bubble apps that are currently utilizing the streaming functionality. Tons of GPT plugins, but not one is using streaming. If you can point out one that does, I will close the thread and put this topic to rest.
This is the thread I’ve been searching for! @keith taking your lead here please
Has anyone been able to integrate LangChain with bubble.io? Seems like this could be the missing piece.
Hi, has anyone been able to achieve the streaming element of the API in Bubble? I’ve set up the API for completions but can’t get it working when adding the stream = true parameter.
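For context on what `stream = true` actually returns: OpenAI sends the response as Server-Sent Events, one `data: {...}` line per token chunk, terminated by `data: [DONE]` — which is why a connector that buffers the whole body never shows anything until the end. Here is a minimal sketch of consuming that stream outside Bubble; the chat-completions payload shape follows OpenAI's documented delta format, while the model name and the use of the `requests` package are assumptions.

```python
import json

def parse_sse_line(raw_line):
    """Parse one Server-Sent Events line from the OpenAI stream.
    Returns the text delta for a content chunk, None otherwise."""
    line = raw_line.strip()
    if not line.startswith("data: "):
        return None          # blank keep-alive lines, etc.
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None          # end-of-stream sentinel
    chunk = json.loads(payload)
    return chunk["choices"][0].get("delta", {}).get("content")

def stream_completion(api_key, messages, model="gpt-3.5-turbo"):
    """Yield text deltas as they arrive. Requires the third-party
    `requests` package (imported here so the parser above stays
    stdlib-only)."""
    import requests
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "messages": messages, "stream": True},
        stream=True,  # tell requests not to buffer the whole body
    )
    for raw in resp.iter_lines(decode_unicode=True):
        token = parse_sse_line(raw) if raw else None
        if token:
            yield token
```

Each yielded token could then be pushed to the page incrementally — the part Bubble’s API connector does not support natively, which is why people reach for plugins or an external backend.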
I don’t have extensive experience with APIs, so some of the vids provided have gone over my head a little/lot.
Hack: call the API as you are and use a typewriter effect to make it look like streaming. A little lag, but oh well.
I’ve recently published a plugin that does it. Not linking here b/c I don’t think I’m allowed to, but you can find it under “ChatGPT with Real-Time Streaming”.