Has anyone used a reasoning LLM in their APIs?

I need to use a reasoning model for one of my workflows, but the call will take 1-2 minutes. I’ve heard Bubble’s backend workflows time out after a certain period. Has anyone used a reasoning model before? Would I need to use a third-party provider to call the API and then return the info through a public backend workflow/webhook?

Edit, looks like there’s a hard-coded limit:

Workflow timeout

Workflows that take more than 300 seconds (5 minutes) will time out. Note that other processes running simultaneously can lead to Bubble throttling your app to maintain stability if your app comes close to maxing out its capacity. This can sometimes lead to workflows timing out because they are slowed down.

CC: @georgecollier

API calls tend to time out after 60 seconds, with some exceptions for calls to OpenAI. You have to set up your own proxy to send results by webhook :roll_eyes:
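For anyone wondering what that proxy-plus-webhook pattern looks like, here’s a minimal sketch. Everything here is a placeholder, not a real API: `call_model` stands in for your reasoning-model call, and `post_webhook` stands in for a POST to a public Bubble backend workflow.

```python
import threading

def handle_request(prompt, call_model, post_webhook):
    """Acknowledge immediately, then deliver the slow result by webhook.

    call_model:   your reasoning-model call (may take 1-2 minutes)
    post_webhook: POSTs the finished result to a public Bubble backend workflow
    """
    def worker():
        result = call_model(prompt)  # long-running call happens off to the side
        post_webhook({"prompt": prompt, "result": result})

    threading.Thread(target=worker, daemon=True).start()
    # Bubble's API call receives this ack well inside its 60-second limit
    return {"status": "accepted"}
```

Bubble fires the initial call, gets the instant ack, and the backend workflow picks up the real result whenever the model finishes.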

In addition, workflows time out after 5 minutes (this can affect some backend workflows that might, for example, try to make changes to or delete lots of things)

cc @nick.carroll allowing users to specify API call timeouts is a big AI enabler (like streaming).


@josh @emmanuel @nick.carroll Increasing the API timeout is extremely high bang-for-your-buck, low-hanging fruit. Having a 60-second timeout on API calls in 2025 is the equivalent of that 80-year-old limestone mine that still processes all government retirements. Test-time compute is the next frontier of scaling; it has been the norm ever since o1 and DeepSeek. Even Bubble AI/Assist takes several minutes to provide a response.

AI apps are the #1 use case for people getting into building. Imagine someone on a forum asking which no-code tool to use and someone responding, “Bubble doesn’t let you use reasoning models.” This is a massive impediment to growth.

On a purely strategic basis, you don’t want me to take 2 weeks to set up a FaaS and do a hacky workaround to get past this issue. The #1 rule of platform growth is to keep users from leaving your platform. YouTube and other sites have warnings that pop up saying, “Are you sure you want to leave?” X/Twitter deboosts links. Apple is, well, you know, Apple. The more you incentivize users to use other services for basic things, the more you encourage them to move off your platform for good. It’s a slippery slope: it starts as a Cloudflare Worker, then Xano, and before you know it, the person has moved off Bubble for their new builds.

Please, for your own sake if not mine, flip a few switches and change the API timeout to 5 minutes. Ideally I’d want 10 minutes for o1 Pro, but that might cause complexity with workflows, which time out at 5 minutes. I cannot imagine this causing problems; I doubt there is heavy orchestration around the API timeout setting. If you’re worried about server strain, leave the default at 60 seconds and expose the setting in the Settings menu.

This is a massive win on multiple levels: it enables Bubble apps to do more impressive things, which increases the chances of a Bubble app becoming successful, which increases the chances of other people finding out about Bubble. It’s literally just finding and flipping a few switches. Please do not say the “high surface area” thing. This is essentially a Bubble Boost. Just do it and announce in your monthly newsletter that Bubble can now use reasoning models. Small amount of effort with very high upside.

Adjust workflow timeout | Bubble

Edit: Apparently you guys have done this before, so it’s possible.

We are looking into it. Thanks for the feedback!


Any chance you guys could flip a switch to make this happen? Even 4 minutes would be huge. It’s really demoralizing that this hasn’t been addressed in 2 months despite being a 5-minute push. It just makes me think you don’t care. I get that all your efforts are on the mobile editor, but this is literally just a few lines of code for something integral to the future of the platform.

You guys use Claude 4 now for AI generation, which 100% adds more latency to the generations. It feels a bit hypocritical not to enable your own users with the most advanced models.

@zoe1 Could you flip the switch?


I’m currently having to build a workaround using n8n, where I create the workflow in n8n and call the n8n API directly from Bubble, simply because of the timeout issues. Would love an update here.


They don’t care. No point in asking. I got trolled into thinking they cared, but they don’t.

Also had to build a hacky workaround to get past this, which puts me another tiny step closer to independence from Bubble!

Pretty much the only way to get around Bubble’s timeout issues if you want to keep the flow one-way. You could set your AI up to act like an agent and call your app’s endpoints, but that would require a more complex setup. Though I’d argue that would be better in the long run.

To be fair to Bubble, if they didn’t restrict API timeouts, everyone else on the shared cluster would have their apps die because a small percentage of apps with badly optimized setups want to connect to reasoning models.

They could have at least made it 4 minutes (still less than the 5-minute workflow timeout).

Technically, the processing cost of the two is totally different. Waiting on an API response means blocking server resources for longer, so fewer API calls can be supported concurrently before the server stalls and eventually crashes.
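Back-of-envelope math on that point (my numbers, not Bubble’s): with a fixed pool of connection slots, the sustainable request rate drops in proportion to how long each call holds a slot.

```python
def max_sustained_rps(slots: int, seconds_per_call: float) -> float:
    """Little's-law-style estimate: slots occupied = arrival rate x hold time,
    so the break-even arrival rate is slots / hold time."""
    return slots / seconds_per_call

# hypothetical shared cluster with 600 concurrent slots
print(max_sustained_rps(600, 60))   # 60s timeout  -> 10 requests/sec
print(max_sustained_rps(600, 300))  # 300s timeout ->  2 requests/sec
```

A 5x longer timeout means a 5x lower sustainable rate on the same hardware, which is presumably why the limit exists at all.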

It also opens a longer window for bad actors to exploit. Not good for security, because you won’t know what’s coming from the other end.

In contrast, workflows are controlled by Bubble.