As of Wednesday April 5th, we have fixed an issue that was causing API calls to GPT-4 to time out, which resulted in an error message of “Could not connect to remote server.”
To address this issue, we have extended the timeout period from 50 seconds to 150 seconds, giving API calls a chance to retry before hitting the 5-minute request duration limit. Calls that take longer than 150 seconds will still time out.
Plugin API calls will still follow the same retry policy as before (4 retries) but in cases where a request exceeds 150 seconds, subsequent retries have shorter timeouts in order to stay below the 5-minute existing limit.
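The retry policy described above can be sketched as a simple budget calculation: the first attempt gets the full 150 seconds, and the remaining time before the 5-minute limit is split across the 4 retries. This is an illustrative client-side model only; the exact server-side scheduling is not published, and the even split across retries is an assumption.

```python
# Sketch of the retry schedule described above. The even split across
# retries is an assumption, not OpenAI's documented behaviour.
TOTAL_BUDGET_S = 300      # 5-minute overall request duration limit
FIRST_TIMEOUT_S = 150     # extended per-call timeout
MAX_RETRIES = 4           # retry count mentioned in the announcement

def retry_schedule(total_budget=TOTAL_BUDGET_S,
                   first_timeout=FIRST_TIMEOUT_S,
                   max_retries=MAX_RETRIES):
    """Return per-attempt timeouts: the first attempt gets 150 s, and
    the leftover budget is divided across the retries so the total
    never exceeds the 5-minute limit."""
    remaining = total_budget - first_timeout
    per_retry = remaining / max_retries
    return [first_timeout] + [per_retry] * max_retries
```

With the values above, each of the 4 retries gets a shorter 37.5 s window, which is why later attempts are more likely to time out than the first.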
If you find yourself continuing to run into this issue, please submit a bug report and we will take a look!
We’re building an app that uses agents and other complex loops before a response is received, which can take almost 5 minutes per call. A 150-second limit means we’ll have to drop Bubble as the base for the app…
Rohan, I’ve seen tutorials where Make.com is used to handle more complex GPT-4 API calls, so you could use that to interface with Bubble, depending on how much of your app is already built and what the switching cost would be.
Hey dude! Would love to chat about ways around this limitation.
Have you considered using an external tool like Google Cloud Functions or Xano or [insert other tool here]? Using GCP would be a much cheaper option.
Essentially you’d:
Create a thing in your db with the relevant user prompt and related info.
Send the prompt and its ID to a cloud function
Let the function process the API call to OpenAI
When the response returns, the cloud function calls a backend workflow or the Data API to update the record accordingly
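The steps above can be sketched as a small cloud-function handler. This is a minimal illustration, not a production implementation: the `BUBBLE_DATA_API` URL, the `prompt`/`record_id` field names, and the record fields being patched are all hypothetical placeholders you would replace with your own app's values.

```python
import json
import urllib.request

# Hypothetical endpoints -- substitute your own app's values.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
BUBBLE_DATA_API = "https://yourapp.bubbleapps.io/api/1.1/obj/prompt"  # assumption

def build_update_payload(record_id, response_text):
    """Shape the Data API PATCH body that writes the model's answer
    back onto the record created in step 1 (field names are illustrative)."""
    return {"_id": record_id,
            "body": {"response": response_text, "status": "done"}}

def handle(request_json, openai_key, bubble_key):
    """Cloud-function entry point (sketch): receives {prompt, record_id}
    from Bubble, calls OpenAI, then PATCHes the record via the Data API."""
    prompt = request_json["prompt"]
    record_id = request_json["record_id"]

    # Step 3: let the function make the long-running OpenAI call,
    # free of Bubble's own timeout.
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps({
            "model": "gpt-4",
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={"Authorization": f"Bearer {openai_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=280) as resp:
        answer = json.load(resp)["choices"][0]["message"]["content"]

    # Step 4: write the result back so Bubble's live queries pick it up.
    update = build_update_payload(record_id, answer)
    patch = urllib.request.Request(
        f"{BUBBLE_DATA_API}/{update['_id']}",
        data=json.dumps(update["body"]).encode(),
        headers={"Authorization": f"Bearer {bubble_key}",
                 "Content-Type": "application/json"},
        method="PATCH",
    )
    urllib.request.urlopen(patch, timeout=30)
```

Because the cloud function does the waiting, the Bubble workflow returns immediately and the UI updates whenever the record changes.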
@jared.gibb FWIW that’s EXACTLY how I’d do this though I’ve had no incentive nor time to try it. Since Things and Lists have that sort of default “live query” connection to the database, it should mostly work like magic. I’d probably wrap the Bubble part in a plugin, just because that’s how I roll.
It’s also worth keeping in mind that token/response length affects the risk of timeouts.
If you can break your requests into smaller pieces/chunks, the 150 s time limit is much less of an issue.
We are noticing a slowdown in our response time using GPT-3.5 Turbo. It used to take up to 2 minutes for a task to complete; now it’s taking more than 5 minutes for the same task.
The call shouldn’t wait the full 150 seconds; it should finish sooner once the app gets a response…
My general suggestion from experience is to keep GPT requests short and concise. Break a job your app does into multiple tasks. It’s a more complex approach, but you’ll most likely get better results and never time out.
We do keep it short and concise, and we get amazing results without any timeouts. It’s just that since this update the response time has significantly increased, but only when running the call via a backend workflow.
Is there anything we need to do to opt in? Calls from the API Connector plugin (I assume that’s what this affects?) are still timing out after 30 seconds for me (which I had down as the timeout length, rather than 50 seconds).