How To Think About Solving Complex Bubble Problems

A while back I posted about how to build a multi-step AI agent in Bubble. That’s useful, but what’s more useful, I think, is how you land on that solution in the first place.

This post is about how to think when you hit a complex problem in Bubble – especially when Bubble doesn’t quite give you what you want. It’s a skill we look for in our team, and one that’s hard to teach and to learn.

I’ll use a multi-step AI agent as the running example, but the goal of this post is to show a repeatable way of reasoning about complex Bubble problems:

  1. Start from non‑negotiable requirements.

  2. Translate them into hard constraints in Bubble.

  3. Let those constraints define the “ground rules” of your solution.

  4. Make early decisions that keep things modular and easy to extend later.


1. Start from non‑negotiable requirements

Ok, we’re going to build a multi-step AI agent! For our agent, let’s say the functional requirements are:

  • The logic is reliable (doesn’t silently break).

  • It can be run in the background (no dependence on a user’s browser).

  • It’s modular (each part can be changed without touching everything else).

  • It’s maintainable (future-you or teammates can reason about it).

Most people jump straight to “which plugin / which API call / how do I loop?”

Instead: treat those four bullets as hard constraints and ask:

“Given these constraints, what can’t I do in Bubble?”

Answering that question narrows the solution space very quickly.


2. Turn requirements into Bubble constraints

Let’s translate the above into concrete Bubble “rules” that shape the architecture.

a. “Can be run in the background” → must be a backend solution

If the agent should work when:

  • the tab is closed

  • the workflow might take minutes

…then front-end workflows are disqualified.

That immediately gives us our first ground rule:

Rule 1: The agent loop must live in backend workflows, not the page.

This is also important for maintainability. Most Bubble AI agents are crippled by relying on front-end plugins. How are they going to implement background AI agents that can be scheduled, or run in parallel? They can’t.


b. “Multi-step” and agent-controlled conversation length → must be a loop

A multi-step agent:

  • decides when it’s done

  • at each step, can either:

    • return a final answer, or

    • call a tool and then continue

That’s logically a loop:

  1. Call AI with conversation context + tools.

  2. If it’s done → stop.

  3. If it wants a tool → run tool → go back to step 1.

In Bubble, a “loop” in the backend usually means:

Rule 2: Use a recursive backend workflow (a workflow that re-schedules itself) for the agent.

Now we know we’re building:

  • a backend workflow that:

    • calls the model

    • decides whether to stop or re-schedule itself

  • and some way to store the Scheduled workflow ID so we can cancel it (more on that later).
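The three-step loop above maps directly onto a recursive function. Here is a minimal sketch in plain JavaScript (not Bubble, and not a real AI API; `callModel` and `runTool` are placeholders standing in for your API Connector calls):

```javascript
// Hypothetical sketch of the agent loop that Rule 2 emulates with a
// recursive backend workflow. Each call of agentStep is "one scheduled
// workflow run"; the recursive call at the bottom is the re-schedule.
async function agentStep(conversation, callModel, runTool) {
  // 1. Call AI with conversation context + tools.
  const reply = await callModel(conversation);

  // 2. If it's done -> stop.
  if (reply.type === "final") return reply.content;

  // 3. If it wants a tool -> run tool -> go back to step 1.
  const result = await runTool(reply.tool, reply.args);
  conversation.push({ role: "tool", content: result });
  return agentStep(conversation, callModel, runTool);
}
```

In Bubble, the recursive call becomes “Schedule API Workflow (this workflow)”, and the conversation array becomes Messages linked to a Conversation.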


c. “All messages belong to a conversation” → data model

You’ll often feel stuck until you pin down the data model implied by the behavior.

Here, the behavior is:

  • There is a conversation (thread).

  • That conversation has messages (user, assistant, tools, etc.).

  • The agent’s logic always runs within the context of a conversation.

So:

Rule 3: We need at least two core types:

  • Conversation

  • Message (linked to a Conversation)

Everything else (tool calls, models, etc.) can be layered on later, but these two are non‑negotiable.
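As a sketch in plain JavaScript (field names are illustrative, not Bubble’s actual schema), the link between the two types is just:

```javascript
// The two non-negotiable types from Rule 3, as plain objects.
// A simple counter stands in for Bubble's unique IDs.
let nextId = 1;

function createConversation(title) {
  return { id: nextId++, title, scheduledWorkflowId: null };
}

function createMessage(conversation, role, content) {
  // Every message links back to its parent conversation.
  return { id: nextId++, conversationId: conversation.id, role, content };
}
```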

This thought pattern generalizes:

“What are the persistent nouns my behavior keeps referring to? What kinds of things (ha) am I dealing with?”
→ That’s your data model.


3. Make tools extensible and modular, not rigid

We don’t just want tools that work once; we want tools that:

  • can be filtered by context,

  • are easy to add without rewriting a giant workflow,

  • and are optional based on conversation state.

If we just hard-code tools in a single massive workflow with tons of conditions, we’re locking ourselves into pain later.

Instead:

Rule 4: Represent tools as an option set (or similar configuration), not a pile of “Only when this tool name” conditionals scattered everywhere.

For example, an AI Tool option set:

  • name (what the AI calls)

  • schema (JSON tool definition)

  • isEnabledInContextX flags, or:

  • category, requires_auth, etc.

This lets you:

  • filter the tools passed to the model based on conversation properties, user roles, etc., by only including the relevant tool schemas in the API call to your AI provider

  • add a new tool by:

    1. adding one option

    2. adding one corresponding custom event

No existing tool logic needs to be touched.
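Conceptually, the option set is a declarative table of tools that you filter before every model call. A rough sketch in code (the tool names, flags, and schemas are invented for illustration):

```javascript
// A hypothetical mirror of the "AI Tool" option set: one entry per
// tool, with a schema (what the AI sees) and flags used for filtering.
const AI_TOOLS = [
  {
    name: "query_knowledgebase",
    requiresAuth: false,
    schema: { type: "object", properties: { query: { type: "string" } } },
  },
  {
    name: "send_email",
    requiresAuth: true,
    schema: { type: "object", properties: { to: { type: "string" }, body: { type: "string" } } },
  },
];

// Filter the tools passed to the model based on user/conversation state.
function toolsFor(user) {
  return AI_TOOLS.filter(t => !t.requiresAuth || user.isAuthenticated);
}
```

Adding a tool is one new entry here plus one handler; nothing existing is touched.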


4. Isolate tool execution logic into modular “functions”

When the AI chooses a tool, what actually happens?

Conceptually, each tool execution is:

  • an isolated function:

    • input: arguments from the AI (+ maybe conversation/message context)

    • logic: does something, or reads something

    • output: a result (string / object) to feed back into the AI loop

This pushes us to:

Rule 5: Each tool’s logic should live in its own custom event / backend workflow.

So we get something like:

  • A router workflow: “Use Tool”

  • For each tool:

    • a dedicated custom event:
      Run Query Knowledgebase
      Send Email
      Create Ticket
      etc.

Why this matters:

  • You can reason about each tool in isolation.

  • Bugs in one tool don’t contaminate others.

  • Adding or editing a tool doesn’t require editing some fragile 200-step workflow.

Again, this is a general design principle:

Push variability into small, composable units instead of large, long workflows.
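In conventional code terms, the router plus per-tool custom events is a dispatch table of isolated functions. A sketch (the handler bodies are placeholder stubs, not real integrations):

```javascript
// One isolated "function" per tool, mirroring one custom event per tool.
const toolHandlers = {
  query_knowledgebase: args => `results for "${args.query}"`,
  send_email: args => `email sent to ${args.to}`,
};

// The "Use Tool" router: look up the handler and run it.
function useTool(name, args) {
  const handler = toolHandlers[name];
  // Surface unknown tools explicitly instead of failing silently (reliability).
  if (!handler) return `Unknown tool: ${name}`;
  return handler(args);
}
```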

We end up with this:


5. Work with Bubble’s limitations (example: JSON)

Bubble doesn’t have a native “string → JSON object” feature in workflows.

For tools, that matters because:

  • The AI returns tool arguments as a JSON string.

  • Each tool has a different schema.

Therefore:

Rule 6: Accept that Bubble can’t parse tool-argument JSON natively, and design for it.

One clean approach:

  • For each tool, define a dedicated API Connector call to your own backend:

    • input: raw JSON string of arguments

    • output: parsed JSON in a stable shape Bubble can work with (accessible as the result of step X’s API call)

The general pattern here is to find the hard limitation (here: JSON parsing) and accept it early.
Wrap it in a consistent abstraction (here: per-tool API call) instead of hacking around it all the time.
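Here is a sketch of what such a parsing endpoint might do server-side. The `ok` flag and the flattened-keys response shape are my assumptions, not a prescribed format; the point is that Bubble always receives the same stable structure back:

```javascript
// Parse the raw JSON argument string the AI returned and flatten it to
// the keys a given tool's API Connector call expects.
function parseToolArgs(rawJson, expectedKeys) {
  let parsed;
  try {
    parsed = JSON.parse(rawJson);
  } catch {
    // Never let malformed arguments break the loop silently (reliability).
    return { ok: false, error: "invalid JSON" };
  }
  const out = { ok: true };
  for (const key of expectedKeys) out[key] = parsed[key] ?? null;
  return out;
}
```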



6. Design the UX behavior into the architecture

Another requirement:

“We need to show a loading/‘thinking’ message before the reply is generated.”

If you bolt that on later, you’ll end up with confusing conditions everywhere.

In addition, it needs to show quickly: we don’t want a user message to wait a few seconds before appearing in the UI. Therefore, the message creation must happen in the front-end, which then schedules the agentic loop. (Note that we can also make the logic for kicking off the loop support message creation both in the front-end and in the agentic loop itself, for kicking off background agents.)

Therefore:
Rule 7: Create the user and assistant messages together, upfront, in the front-end workflow.

So the sequence becomes:

  1. User sends a message.

  2. Create user message and assistant message (isGenerating = yes)

  3. Frontend immediately has a “loading” assistant bubble to display in the last repeating group cell for this conversation’s messages.

  4. Once the AI responds or the tool completes:

    • we fill in the assistant message content

    • set isGenerating = no.

Now the UX requirement is a first-class part of the architecture, not an afterthought.
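The sequence can be sketched as two small functions (the `isGenerating` flag comes from the rule above; everything else is illustrative):

```javascript
// Rule 7: create both messages up front, so the repeating group
// immediately has a "thinking" assistant bubble to display.
function sendUserMessage(messages, content) {
  messages.push({ role: "user", content });
  const assistant = { role: "assistant", content: "", isGenerating: true };
  messages.push(assistant);
  return assistant; // the agent loop fills this in later
}

// Called once the AI responds (or the tool chain completes).
function completeGeneration(assistantMessage, content) {
  assistantMessage.content = content;
  assistantMessage.isGenerating = false;
}
```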


7. Control and cancellation: don’t forget lifecycle

We also want:

“We need to be able to stop the agent.”

That forces another design decision most people forget at first:

Rule 8: Store the Scheduled workflow ID for the recursive agent in the database (e.g. on Conversation).

Why?

  • Each time the agent re-schedules itself, you save that scheduled ID.

  • A “Stop agent” action can then:

    • cancel the scheduled workflow using that ID,

    • clear or flag the conversation as “stopped.”

Again, the thinking pattern is that every long-running process should have:

  • a way to start

  • a way to observe state

  • a way to stop
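A sketch of the pattern, where `scheduler` stands in for Bubble’s Schedule API Workflow / Cancel a scheduled API Workflow actions (the interface is invented for illustration):

```javascript
// Rule 8: every time the loop re-schedules itself, persist the new
// scheduled workflow ID on the Conversation.
function scheduleAgentStep(conversation, scheduler) {
  conversation.scheduledWorkflowId = scheduler.schedule("agent-loop");
}

// "Stop agent": cancel exactly the run we stored, then flag the state.
function stopAgent(conversation, scheduler) {
  if (!conversation.scheduledWorkflowId) return;
  scheduler.cancel(conversation.scheduledWorkflowId); // a way to stop
  conversation.scheduledWorkflowId = null;
  conversation.status = "stopped";                    // a way to observe state
}
```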


8. Future change: async tools with user input – was our design good?

Now, suppose a client later asks:

“Sometimes the agent should ask the user follow-up questions and wait for the answers before continuing.”

This is a classic test of your architecture.

Do we need to shove more conditions into our existing agent loop? Rewrite half the logic?

If we’ve followed the rules above, the change is surprisingly straightforward.

We can:

  1. Add a field/attribute to the AI Tool option set:

    • isAsynchronous (yes/no)

  2. For a tool like “ask_structured_questions”:

    • mark isAsynchronous = yes

  3. In the tool router:

    • when a tool is async:

      • create the questions as messages / UI elements

      • do not immediately reschedule the agent loop

  4. When the user finishes answering and clicks “Complete”:

    • we treat that as the “tool result” being ready

    • then re‑kick the agent loop with updated context
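Sketched in code (the flag and status names are illustrative), the async case is just one early return in the tool router: park the loop instead of re-scheduling, then re-kick it when the user completes their answers:

```javascript
// Router branch for async tools: don't reschedule; wait for the user.
function handleToolCall(tool, conversation, reschedule) {
  if (tool.isAsynchronous) {
    conversation.status = "awaiting_user"; // questions get created as messages
    return;
  }
  reschedule(conversation); // synchronous tools continue the loop immediately
}

// The user's "Complete" click acts as the tool result arriving.
function onUserCompleted(conversation, toolResult, reschedule) {
  conversation.status = "running";
  conversation.pendingToolResult = toolResult;
  reschedule(conversation); // re-kick the agent loop with updated context
}
```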

Note what we didn’t have to change:

  • the core recursive loop structure

  • the notion that each tool is a discrete “function”

  • the message/conversation data model

All because we made earlier decisions that:

  • centralized tool configuration (option set),

  • isolated tool logic (per-tool custom events),

  • and respected Bubble’s backend + scheduling model.

This is the payoff of designing with constraints + modularity in mind.


9. A mental checklist you can reuse for other problems

You can apply this thinking style to almost any non-trivial Bubble feature.

When you’re stuck, walk through:

  • Requirements. Are there any hard requirements which will constrain how you do something in Bubble?

  • Modularity and variation. Is there any similar logic that should be consolidated? What might grow in number over time (tools, actions, roles)?

  • Platform limits. Does Bubble make something hard? If it does, how can I simplify it or make it a consistent pattern so that it’s not a pain in the ass every time?

The multi-step AI agent is just one example, but the same thought process works for any complex Bubble problem.


The one thing I use a lot that is not defined here is, to write a story some sort of a scenario, from the POV of the user, this is especially useful in the beginning and at the end of it. At the beginning for the feature to be more comprehensive and at the end for users to not break it.


@georgecollier — this breakdown is genuinely great.

But reading the architecture end-to-end — especially the diagram you included — what really stands out is that this isn’t “no-code” anymore in any real sense. Your flowchart ends up being a full software architecture diagram — loops, backend logic, routing, scheduling — the whole deal.

And what really surprised me is that almost every one of those boxes and arrows exists to work around a Bubble limitation:

  • Bubble can’t loop → so we schedule recursive workflows
  • Bubble can’t run tools as functions → so we build option sets
  • Bubble can’t parse JSON → so we send it to an external service
  • Bubble can’t stream → so we manually fake it
  • Bubble workflows don’t manage themselves → so we store/cancel scheduled IDs
  • Bubble doesn’t support modular logic → so we make tons of custom events

Your writeup is methodical and clear — but almost all of that effort is about navigating Bubble, not building the actual agent.

And here’s the part that hit me personally:
I don’t write code. At all.
But just out of curiosity, I rebuilt the same multi-step agent in a tiny Next.js project… and it took minutes. No scheduling tricks. No workarounds. No JSON hacks. Everything just works straightforwardly.

For context: I built this demo without using CLINE — and CLINE is pretty much to AI-assisted coding what Bubble used to be to no-code 5–7 years ago.

Here’s the demo I built and deployed:
:backhand_index_pointing_right: Simple Agent demo

Which leads to what I think is the bigger point:

Bubble is starting to feel like a UI wrapper, while modern IDEs, AI coding tools, and lightweight frameworks now make this exact type of logic far simpler and faster to build directly — even for people who aren’t coders.

And when the architecture required in Bubble looks like this, with this level of planning, it becomes clear that this isn’t really aimed at the “I don’t understand code” crowd anymore. If someone can fully understand your diagram, they’re already thinking in developer terms.

At that point the question becomes:

Why keep fighting Bubble’s constraints when the tools we have today let you build the same thing directly, more simply, and in far less time?

Your post is a great illustration of the planning process.
But the fact that we need this level of architecture to build a simple multi-step agent says a lot about whether Bubble is still the right tool for these kinds of workflows. And weirdly, the learning curve in VS Code now feels lower than the learning curve for Bubble’s backend.


You would do a similar approach in traditional development (i.e making tools modular). The abstraction is slightly different, but the principle is the same.

What? We’d do that in traditional development too…

What? A custom event is just Bubble’s abstraction of a function. Bubble does support modular logic.

What you (or ChatGPT) have actually outlined is that the principles of development are the same in traditional development and Bubble. They’re just expressed slightly differently.

This is true. But there are hundreds of successful Bubble apps that need people who can do this. The fact remains that senior Bubble developers are in high demand. Could a technical Bubble user now build faster using AI-assisted coding? Quite possibly.


Modularity absolutely exists everywhere, obviously.
But the amount of structure you need in Bubble to express basic modularity is dramatically higher.

In a modern setup (especially with AI tools assisting you), a “tool” is something you define once and use directly. In Bubble, the same idea requires option sets, custom events, JSON parsing, scheduling logic, etc. The principles may be the same but the development overhead is not.


Only partially.

Most modern platforms handle background tasks, lifecycle management, and cancellation natively. You don’t manually track IDs for loops…

Bubble is unique in that it requires the developer to store scheduled IDs and cancel them manually, and so on. This isn’t a general software development rule…


Custom events do provide organizational modularity in Bubble, but they are not equivalent to real functions.

Bubble custom events still lack many of the properties that make functions usable as modular building blocks. In other words, Bubble supports organizational modularity, but not functional modularity.


I agree completely that the principles overlap. Logic is logic.

The issue is the friction involved in expressing those principles.

In modern tools (especially paired with an AI assistant), expressing a loop, a tool call, or a background step is extremely direct because they are native.

In Bubble, each one becomes a workaround that you have to build manually.

So the principles of development may be the same, but the cost of expressing them is very different.


I hear you, but the fact that someone has to go into full developer mode just to follow that diagram is exactly my point. And the heavy demand for senior Bubble developers doesn’t really tell us Bubble is growing. It mostly reflects how hard it is to untangle and maintain complex backend workflows once an app gets big. That’s not a strength. It just means Bubble creates systems that only specialists can keep running.

If Bubble didn’t require so many workarounds to build anything even slightly complex, there wouldn’t be this constant need for highly specialized developers to maintain older apps. And once someone is already thinking at the level your diagram requires, the path of using AI-assisted code has become faster AND easier than staying inside Bubble’s limits.

A lot of those “successful Bubble apps” you mentioned are now tough to expand or maintain because they were built around Bubble’s limitations. Many of them do end up getting rebuilt when they want to add real features, and now that kind of total rebuild can be done in a matter of days or weeks.

And on the AI-assisted coding point: if someone understands the architecture you outlined, it isn’t “possibly” faster to build with AI. It is definitely faster. The amount of workaround and manual structure you need in Bubble for something like this just doesn’t exist in today’s tools.

So yes, senior Bubble devs are in demand, but much of that is for keeping older apps alive. It’s not because Bubble is the quickest or simplest way to build this kind of functionality today. And that’s really the whole point. Once a user is already thinking at the level your diagram requires, Bubble isn’t the easier option anymore. The alternatives have become quicker, simpler, and just better. Oh yeah, and of course a tiny fraction of the cost…


A tool is just this… it can’t be much simpler no matter where you’re defining it.

Not really. They’re tough to maintain because the founders didn’t know how to build around Bubble’s limitations, or were built in spite of Bubble’s limitations when they never should’ve been built on Bubble. Same’s gonna happen if you vibe code on a stack you don’t understand.

I mean, we’re getting into specifics here, but I promise you that every tool will require you to pass the ID of a scheduled job you want to cancel. Else you’re cancelling ‘something’ and not saying what :slight_smile: Where do you get that ID? By storing it in your database… of course you can stop it by ending it inside the function rather than cancelling the run altogether, but you can do that in Bubble too so not really sure where you’re trying to go with this.

@georgecollier I don’t think that’s accurate. In most modern environments, developers never pass job IDs around by hand. The platform manages job lifecycle internally. You cancel a job through a method or through normal control flow, not by shuttling IDs between steps.

The reason Bubble forces you to store the ID, pass it through workflows, and cancel it manually is because Bubble doesn’t have native job lifecycle management. There are no background workers, no cancellation tokens, no built-in queues, and no native async loops. So the developer ends up building all of that plumbing manually.

When I spent 10 minutes yesterday building this multi-step agent outside of Bubble, I didn’t touch a single job ID, and I didn’t build any scheduler logic at all. That was all handled automatically, which is exactly the point I’m trying to make.


@georgecollier , I get what you’re saying, but the screenshot you shared is just the definition of the tool. Of course that part looks simple. The JSON schema is the easy part in any platform.

The complexity starts the moment you try to actually use that tool inside the agent loop.

Your other screenshot shows what I mean:

  • multiple parameters
  • multiple return values
  • branching conditions
  • API calls
  • Return data steps
  • a router that decides which tool to run

and all of this has to connect back into the recursive backend loop.

So I’m not talking about the JSON definition and whether it needs to be simpler.

I’m talking about all the steps supporting the tool. None of this plumbing exists in other stacks. You define the function once and call it. The system handles execution, routing, scheduling, and data passing for you.

All these extra steps only exist because Bubble is missing almost every native building block modern apps rely on: real functions, real loops, real streaming, and real JSON handling. Without those, you end up stitching together a ton of scaffolding just to get basic behavior working.

Sometimes that’s true.
But the setup you posted is complicated even when you do everything right. Bubble simply wasn’t built for the workflows modern apps need today. Anything beyond straightforward CRUD grows into layers of workarounds that get messy fast, no matter who is building it.


The real point

Once you’re thinking at that level (of understanding your architecture), AI-assisted code is not just “possibly” faster.
It’s definitely faster, easier, and cheaper because you stop building all the missing plumbing yourself.

Bubble works well for basic CRUD. But once you try to build something like this, the backend limitations don’t just outweigh the benefits. They wipe them out entirely.


This is exactly the same as in traditional development I’m afraid :rofl: The grass is not always greener.

I’m sorry, but you’re giving this right now:

For avoidance of any doubt….


@georgecollier, you keep responding as if the argument is about whether schedulers or IDs exist. Obviously they exist. Obviously something tracks them. Obviously something advances the loop. Obviously there are 0s and 1s flipping somewhere — sure.
But that has nothing to do with whether the developer should be the one wiring all the machinery on top of it.

The point is who has to wire that machinery.

In normal stacks, it works like pressing the gas pedal:
the runtime handles the ignition sequence automatically. Pistons, combustion, timing — all built in.
The developer never thinks about it.

Bubble, instead of giving you the pedal, hands you the ignition parts and expects you to wire it all together, and then says: “Now track the job ID yourself, store it, pass it between workflows, cancel it manually, guard against duplicates, and hope the loop doesn’t run away.”

So when you promise that “every tool requires passing the job ID,” sure — in the abstract. But it skips the only thing that matters to someone actually building the system: who has to do that work.

Bubble makes the builder do the work the platform is supposed to handle.

:pushpin: Why your meme doesn’t land

The output is the same.
The effort to get there is not even close.

Here is the accurate version:

Same outcome. Completely different amount of effort.


:pushpin: And here’s the simple reality you keep sidestepping

In Bubble, you spend your time building plumbing: routers, schedulers, job IDs, cancellation logic, JSON parsing workarounds, recursive backend workflows, optimistic UI wrappers, and the glue to keep all of them consistent across workflows.

In my Next.js version, none of that even enters my brain. The platform gives me loops, functions, JSON parsing, execution flow, runtime guarantees, and async tooling natively.

So the actual “tool” in Next.js is literally:

async function browseWebTool(args) {
  return doSomething(args);
}

Not because the logic is different — but because the platform handles everything Bubble forces the user to build manually.

Bottom Line: Bubble works well (sometimes even very well).
But the moment you need anything beyond CRUD, it stops being “no-code” and starts becoming “re-implement the backend yourself.” → At that point, the simpler and faster path is obvious.

This is my #1 complaint about Bubble. Everything needs a convoluted workaround if you’re trying to build an app for 2025.

It’s fine if you want to build a 2016 app, but for a 2025 app it’s just painful.


Thank you for this @randomanon , really.

I was beginning to believe that “you can eventually get there in 14 steps” is the same thing as “the platform already gives you the feature.”

Two tools producing the same output doesn’t make them equally sane to build with.
Glad someone else sees the gap.


There’s still a lot of learning to be done in order to ensure that your code written by AI:

  1. works as expected
  2. is not going to overflow your stack
  3. is not over engineering your intent

I do not disagree with you about how Bubble requires weird workarounds for simple programming. Yet it’s very naive to assume that just because AI can write you code, you won’t need to know how the libraries you use in production work and function.

This is a very narrow view of IT infrastructure. IT infra selection is very broad. It’s not just about “what is faster”. You have to work with a budget, a timeline, team expertise and objectives. Scale your AI-assisted-code apps and then compare to what it would have cost you (not just financially) to have just built it in Bubble.

I love AI for coding because it’s an enabler. I code all of my external pipelines, I’ve built 2 simple web apps deployed on Cloudflare workers and they all work flawlessly. I’m also really digging Google’s Antigravity for a fun little side project. Yet…I find it so much easier to just build and deploy in Bubble.

I don’t worry about client - server security, the APIs, the token management, I don’t have to juggle 5 different dashboards just to diagnose issues, I don’t worry about getting invoices from 10 different services and I don’t have to be a mathematician to calculate my costs. AI does not remove ANY of that friction and it never will.


Every year we are straying further and further away from “God’s light.” The purpose of Bubble as stated by @emmanuel was for his grandma to be able to build an app. Yet we are on the trajectory of Bubble becoming more complicated than code (outside of the 99th percentile of coding complexity).

We can no longer make any excuses for the lack of native parallelization/loops.

New users will always take the path of least resistance. At this rate, Bubble will only exist to serve legacy customers in a year’s time.

Arguably this is what Replit (and others) are trying to figure out, and will eventually succeed in doing so.


Maybe. Though not with LLMs. LLMs are too nondeterministic to be reliable enough to oversee deterministic systems. That’s not even taking into account how expensive it will be.

In theory, models trained to work with different parts of software development are better choices in the long run. AI in software development is already moving towards specialized models, but the economics are not making it viable.

If you examine the requirements to achieve this, Bubble is already in a good position. All they have to do is let us write and run custom code outside of plugins.


Hmm…what you are saying is that AI-assisted development is cheaper than traditional software development. It does get cheaper (financially) when you shrink dev teams, but you really should take the time to understand the actual costs of software development.

Another reason why you should do the research and the math on how much it costs to scale a software, especially cloud-based systems. Friction isn’t about “the feels”, it’s about objectives, sustainability and continuity.

Everything in building software has friction.

Frameworks have friction: syntax, functional limitations etc.
Deploying AI has friction: You still need to write the correct prompts and provide the correct context. As requirements get complex, you need to engineer a pipeline that not only maximizes the models you want to deploy, but you’ll also need to manage the potential costs and reliability.

The only scenario where no friction exists is when you can execute just by thinking it.

Moore’s Law sat in the naughty corner years ago. It’s simply an observation of a trend that does not apply today. We’re looking at trillion-dollar investments just to scale AI and maybe, just maybe…we’ll have enough electricity to power “all them AI brains”. Compute gains in the modern era are a result of better ways of doing things, be it through architecture or hardware specialization.

As mentioned, “friction” in software development isn’t just about the code…your takes on it are uninformed…

I don’t think you understand how LLMs work well enough to understand the limitations and unpredictability that come with it. You can give a human instruction, and they will be incentivized to follow those instructions. LLMs don’t have any of those incentives. They are mathematical algorithms using weights to match prompts against the dataset they are trained on.

In your plane analogy, trained humans still sit in the pilot seats. Humans that require time to train and money to employ.

Code that calls code still needs to be deterministic. LLMs are unreliable inputs.

I have deployed LLMs in a production pipeline. Let’s take the simple stuff: take user input → generate JSON. I spent hours fine-tuning the prompts. I added guardrails and it got too complex for something so simple. Guess what? It fails from time to time with too much variance for me to determine a solution.

That is not deterministic…I had to band-aid the UX so my users think it’s their fault and not the system’s.

Bad code is still bad. Damaging code still does damage. Remember, all it took to take down the entire internet was a single line of bad code (I assume deployed by a human) within AWS, Google and Cloudflare.

I’m talking about macroeconomics: Supply and demand, logistical and financial issues caused by geopolitical bru-hahas.

Here you say “cheaper”…1 million is still cheaper than 100 million. Regardless, cheaper inference does not equate to cheaper training.

Have you tried to train domain expert level models? I have been learning to, and fine-tuning models the last few months. Those are already relatively expensive for my business model. Take a look at the capex investments that go into companies that train models and you’ll have a good estimate of what “cheap” really is.

This irks me because I support the statement, but it fails in supporting your rebuttal. You don’t need AI to accomplish what you did. It’s the same as me building and stacking microservices to support my Bubble apps.

Did coding with AI help? Most definitely, cause I hate writing code.
Could I have written it myself? Yes. I had a desperate incentive to offload processes to infra that was better suited for the objective I wanted to achieve.

Conclusion

To me your argument for AI-assisted coding is the same as the argument about Bubble versus traditional code. The same pros and cons still apply.

You need to have a broader understanding of software development…understand that AI-assisted coding only alleviates the pains of some parts of it. The problem with your arguments (and others’) about AI versus Bubble is the “eventuality”. Bubble solves multiple problems in software development now.

I can safely deploy a viable monetizable product now and then throw money at Bubble when I scale. Nothing needs to change if it works because I don’t have to deal with the parts that make scaling tedious. Production problems? I just yell at Bubble while I tell my customers “We’re having issues with our infrastructure provider. We’re working very closely with our partners to resolve this issue. Thanks and much love!”.

Then I just wait, smoke some cigarettes and ponder about existence.

On the other hand, I’ve met a handful of founders who got funded with their “cheap” vibe-coded MVP…only to then spend tens of thousands to hire a dev team and rebuild the product. I’m not pivoting the conversation to vibe-code vs Bubble, I’m showing that the realities of software Development still stand. AI assisted or not.


How to solve complex problems in Bubble (or in general :grinning_face: ), or at least how I solve them? Split the complex problem up into small problems/steps. Solve the small problems and you will solve the complex problem.

That is it, oh and:

  1. search the forum because there is a good chance your question about the complex problem has been asked and answered already.

  2. if you know any experts, ask them

  3. ask AI.

But be careful with experts and AI (and me), they (we) sometimes hallucinate. :wink:


Well said, and that’s exactly what AI agentic coding needs as well: break it up into small steps where it does become deterministic (there are only so many combinations of words you can have in a 100-line block) and the agent is firing on all cylinders… and of course, if there are issues, it makes it so much easier to debug.


Determinism

@ihsanzainal84, This still confuses the generator with the system.

Obviously, I’m not proposing using an LLM as part of the runtime.

An unreliable LLM output during development is no different from a junior dev making a mistake. You fix it, commit, and move on.

Determinism belongs to the runtime, not the tool that wrote the code.
And to be clear — LLM nondeterminism affects only the draft it produces, not the behavior of the running system. Once the code is deployed, the runtime guarantees determinism the same way it does for human-written code.


Macroeconomics

This has nothing to do with choosing Bubble vs AI agentic coding.

Founders aren’t calibrating their stack based on global energy supply forecasts.
They’re choosing the tool that lets them build quickly and reliably.

Even if model training cost doubled, it wouldn’t change one thing about using AI agentic coding to avoid Bubble’s workarounds.


Friction

Of course everything has friction.

Bubble’s friction is structural and introduced by the platform itself. That is much heavier friction than worrying about one or two cloud invoices (btw, if you’re concerned about cost, teams already use Slack bots to track that).

And claiming AI “never” removes friction just ignores the entire history of computing.


The actual point

This still doesn’t respond to the core argument.

The question isn’t whether Bubble can deploy a CRUD product.
It’s whether it remains the fastest, cleanest, or least limiting option once you need loops, scheduling, streaming and other core building blocks.

These are built-in features in modern stacks.
In Bubble, they require workarounds.

And the “vibe-code MVPs rebuild later” story actually makes the opposite point:
teams rebuild because the constraints eventually choke them. Bubble creates the same dynamic; it just happens more slowly.

Which brings us back to the actual question:

Which approach lets teams build scalable apps without wasting time, money, or sanity on avoidable workarounds?