OK, but how about being a little bit more practical? The median 2025 app is going to be stringing together a bunch of LLM calls to achieve some functionality, right? And right now, actions in a backend workflow always run in sequence: each LLM call waits for the previous one to finish before the next one starts, even when they don't depend on each other. On top of that, each workflow is capped at 5 minutes.
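To be concrete, here's a rough sketch of what I mean, in plain TypeScript outside of Bubble, with a hypothetical `callLLM` helper and a placeholder API endpoint (not real code from any specific provider). Independent calls like these could run concurrently, so the total time is roughly the slowest call rather than the sum of all of them:

```ts
// Hypothetical helper: wraps whatever LLM API you use and returns the generated text.
async function callLLM(prompt: string): Promise<string> {
  const res = await fetch("https://api.example.com/v1/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const data = await res.json();
  return data.text;
}

async function buildProductPage(description: string) {
  // These three calls don't depend on each other, so they can run in parallel.
  // Total time ≈ the slowest single call, not call1 + call2 + call3.
  const [title, summary, tags] = await Promise.all([
    callLLM(`Write a short title for: ${description}`),
    callLLM(`Summarize in two sentences: ${description}`),
    callLLM(`Suggest five tags for: ${description}`),
  ]);
  return { title, summary, tags };
}
```

In a backend workflow today, the equivalent of those three calls runs one after another, which is exactly how you blow through a 5-minute cap with calls that could have overlapped.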
Do you see how the screenshot below is a problem? Despite the meme that "Bubble.io is built on Bubble," you get to run a generation for 7 minutes, while I'm capped at 5.
What are you trying to optimize for? Have you built an AI app recently? Do you know what problems AI app builders are facing in 2025? Or are you building a product designed for 2016-era apps?
Or maybe I’m just a “power user” and I’m asking for something extremely weird. Would love to know!
