Extending timeout period for API calls to GPT-4

Hmmm, that’s unexpected. This behavior should apply to everyone. Could you please submit a bug report for this?

Hi there. I have GPT-4 API calls that take ~180 seconds to return, so they still fail under the 150-second limit. I've set these up as backend workflows so my users can keep browsing the site while the result of the API call is returned and saved to their profile, which they can view at any time. GPT-3.5-turbo works with a much higher success rate (maybe 5% of requests time out).
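The backend-workflow pattern described above can be sketched roughly like this. This is a minimal Python sketch with hypothetical names: `slow_gpt4_call` and the `profile` dict stand in for the real OpenAI request and the database write, not for any Bubble or OpenAI API.

```python
import threading
import time

# Hypothetical stand-in for a slow GPT-4 call; in a real app this would be
# an HTTPS request to the OpenAI API that can take ~180 seconds.
def slow_gpt4_call(prompt):
    time.sleep(0.1)  # latency shortened for the demo
    return "completion for: " + prompt

# In-memory stand-in for the user's profile record in the database.
profile = {}

def run_in_background(user_id, prompt):
    """Start the slow call without blocking, then save the result."""
    def worker():
        profile[user_id] = slow_gpt4_call(prompt)
    t = threading.Thread(target=worker)
    t.start()
    return t

t = run_in_background("user-1", "Summarise this document")
# The user keeps browsing; nothing blocks waiting on the API call.
t.join()  # only so this demo waits; a real worker finishes on its own
print(profile["user-1"])  # -> "completion for: Summarise this document"
```

The point is that the user-facing request returns immediately; only the background worker waits on the model, and the result lands in the profile whenever it arrives.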

Is there anything else in the works that might allow for longer API timeout periods? Is there a suggested workaround? Obviously I'm not an experienced coder (hence being on a no-code platform), so it's frustrating that this doesn't simply work, and breaking things into multiple smaller API calls and stitching everything back together really starts to push the "no-code" friendship a little :slight_smile:
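The "smaller calls stitched back together" workaround mentioned above can be sketched like this (Python, with hypothetical helper names; `process_chunk` stands in for one short GPT call that finishes well inside the 150-second limit):

```python
# Split a long input into chunks small enough that each API call stays
# well under the timeout, then join the partial results afterwards.
# All names here are illustrative, not Bubble or OpenAI APIs.

def chunk_text(text, max_chars=2000):
    """Split text into pieces of at most max_chars characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def process_chunk(chunk):
    # Stand-in for one short GPT call on a single chunk.
    return f"[{len(chunk)} chars processed]"

long_text = "x" * 5000
parts = [process_chunk(c) for c in chunk_text(long_text)]
stitched = " ".join(parts)  # stitch the partial results back together
print(stitched)
```

In practice each `process_chunk` would be its own API Connector call in a backend workflow, which is exactly the extra plumbing being complained about here.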


Hi @henry.dowling , I still get the 30-second timeout on API calls.
That's also what's still specified here: Special Plugins - Bubble Docs.

Do we need to do something to get the 150-second timeout?


Hmm, if you’re seeing a 30 second timeout on API calls made through the API connector, that sounds like a bug. Could you please submit a bug report?

Hi Henry,
Are you sure this is a bug? It's also what's documented in the Bubble docs here: Adding API Connections - Bubble Docs

Hmmm, good catch. The docs are out of date here! We extended the API timeout period to 150 seconds. We will update the docs to reflect this.

So, to be clear, this behavior you’re experiencing is a bug—I’m sorry you’re bumping into this. If you submit a bug report we can investigate what’s causing this and look into a fix.

Is it a bug if our plugin, which makes an API call, throws the 30-second timeout error?

I get this “2023-10-18T23:48:47.191Z f089bf64-2809-4a61-9204-20916e0422a2 Task timed out after 30.04 seconds”

@henry.dowling Any idea how large context windows will be handled given the new GPT-4-Turbo model? I assume 150 seconds won't be long enough for an API request with 100k+ tokens on the new GPT-4 model. It would be awesome if we could set the timeout parameter ourselves, or optionally have no timeout at all!
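One way to sidestep a hard timeout with large contexts, as later replies in this thread also suggest, is streaming: instead of waiting minutes for one big response, tokens arrive continuously, so no single wait ever approaches the limit. A minimal sketch, with `fake_stream` as a hypothetical stand-in for the OpenAI API's `stream=True` mode:

```python
import time

# Hypothetical generator standing in for the OpenAI API's streaming mode,
# which yields tokens as they are generated instead of returning one big
# response after minutes of silence.
def fake_stream(tokens):
    for tok in tokens:
        time.sleep(0.01)  # each chunk arrives well inside any timeout
        yield tok

received = []
for chunk in fake_stream(["Large", " context", " answer"]):
    received.append(chunk)  # forward each chunk to the client as it arrives

answer = "".join(received)
print(answer)  # -> "Large context answer"
```

The catch is that Bubble's API Connector does not consume streamed responses this way, which is why posters below resort to external services for streaming.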


I am constantly hitting the 150-second limit. This makes it impossible to build complex AI apps in Bubble using GPT-4 natively.

I can't imagine what it will be like when GPT-5 is released.

Having latency limits means not being able to use the full capabilities that external APIs (like GPT-4) offer to developers everywhere. In other words, by using Bubble we are at a disadvantage versus other developers. This goes against Bubble's entire vision, which is constantly proclaimed everywhere.

Something has to be done, hopefully soon. @henry.dowling

Unfortunately, Bubble doesn’t have solutions for everything.

For streaming AI responses without timeout issues, I use a combination of:

Here's a live example of the stack in action: Loom recording.

Happy to answer questions. Feel free to DM.


Can we increase it further, double it even? 150 secs is not enough.

I support the previous requests: please allow us to configure bigger timeouts, because this is really problematic when it comes to API calls to GPT. I'm currently trying to build some long-text processing solutions, and the timeouts occur rather frequently. Is there no way to extend the limit easily?

Check this out and stop making traditional API calls.