Is there a way to make text display as it's generated by OpenAI in Bubble, rather than as a single block?
To explain further: on ChatGPT, once you send a message, the response is displayed as it is generated, rather than the entire response being sent once fully generated. Is there a way to replicate this experience in Bubble? With this, load times feel shorter.
As in streaming the response? Not as far as I am aware without some complex workaround. The API Connector receives the response in one go, not as a series of chunks. That's the case for the completions API. Which API are you using?
Thanks @georgecollier - I've tried using some of the typewriter plugins; however, they seem to work only for a static set of text. Curious if you have suggestions on a plugin that would work for dynamic text output generated via OpenAI?
Yes, the typewriter plugins only work with text that has already been fully provided.
To stream the response (text streaming is the search term/keyword you're looking for), you cannot use the API Connector. The API Connector only returns one output: the entire text. There's no way to make that happen faster in native Bubble. A different kind of connection (server-sent events, or a websocket) is required to receive the text live as it generates.
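For anyone curious what that streamed connection actually carries: OpenAI's streaming endpoints send server-sent events, where each event is a `data:` line containing a JSON chunk with a small text delta, ending with `data: [DONE]`. You can't consume this in native Bubble, but a proxy/plugin backend would parse it roughly like this minimal Python sketch (the `parse_sse_stream` helper and the sample payload are illustrative, modeled on the Chat Completions streaming format):

```python
import json

def parse_sse_stream(raw: str):
    """Yield the text deltas from a raw server-sent-events payload.

    Each event is a line of the form `data: {json}`; the stream
    terminates with `data: [DONE]`.
    """
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # Each chunk carries a partial piece of the assistant's message
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Sample payload shaped like a Chat Completions stream
sample = "\n".join([
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    'data: [DONE]',
])

print("".join(parse_sse_stream(sample)))  # prints "Hello!"
```

In a real setup, the backend would push each delta to the page as it arrives (which is what the streaming plugins do under the hood), so the text appears word by word instead of all at once.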
Search the plugin marketplace for "AI Proxy", "ChatGPT", or "Streaming" and you should find something that works!