The Bubble AI Agent now builds workflows

Hi everyone,

My name is Rutvij, the PMM for AI at Bubble, and I wanted to let you know that workflow generation is now available in the Bubble AI Agent.

What’s new

The Agent can now create workflows for common triggers like button clicks, form submissions, page loads, and user interactions. It handles actions including data operations, email sending, navigation, and user account management.

For example, as you’re building a login and signup flow, you can click on the group containing the inputs to give the Agent context, then ask the Agent “Please add a workflow that signs the user up given these inputs. Be sure to include the password confirmation and full name field.”

Of course, if you write this prompt, you’ll need an existing Full name field on your User data type.

And since everything the Agent generates is built with Bubble’s visual editor, you can always see exactly how it works and make changes directly when you need more control than prompting can provide.

A few prompt examples from a test app, a Goodreads clone

  • Add a workflow to this button that takes me to the index page

  • When the log out button is clicked, log the user out

  • Add a workflow that, when the Add Book button is clicked, adds a new Book to my database with the values of the inputs in this group

Workflow generation is available now in any web app built with Bubble AI. Check out the updated documentation in the Bubble Manual.

Share what you build

We’d love to see what you create with workflow generation and hear your feedback. Drop your examples or thoughts in this thread below.

— Rutvij


Very good!

How can I get access to the AI for an old app built the old-school way?


Realistically, shouldn’t the agent be able to add the field if it doesn’t exist already?


It’s time for a separate forum category for “AI-built” apps (similar to mobile apps).

Right now it’s hard not to get excited by announcements like this, only to read through them and realize they don’t apply to real apps at all…


soon come :wink:


totally hear you, we’re working hard to remove this limitation!

Furthermore, the agent could create an append action that combines the first name with the last name. :smiley:

@rutvij.bhise can you give some more insight into how this works? For instance, is it real LLM intelligence, or rather prebuilt workflows, like a template that the AI understands?

For instance, can we also add fields “funny first name” and “serious last name”, and will it still work?

This question keeps coming up, and I think it’s because there’s no clear answer to a more fundamental one (and if there were, more people might actually use the AI builder):

At what point does an “AI-generated” app stop being an AI-generated app?

Is it after the first manual workflow edit? After schema changes? After renaming X number of fields?

If those boundaries were explicit, I could choose to stay within them when it makes sense…

Hey @sem - it will work in that scenario with fields Funny first name and Serious last name. I made a quick Loom demo so you can see it in action.

Think of the Agent as a way to save you time on Bubble. The Agent is not just delivering prebuilt workflows; it builds a bespoke workflow in your app, using the context of your app.

Check out our manual for the full list of events and actions that are in scope.

The Agent is enabled on any app generated with Bubble AI. If you want to create one, click on the “Create with AI” button on the home page after logging in.

No matter how many changes you make to that app after generating, the Agent will stay enabled.

And as @cory.bishop said above, we’re working hard to remove this limitation and expand access to other kinds of apps!

That’s a really important clarification, appreciate you calling it out. :clap:

Knowing the Agent stays enabled even as the app changes is a big deal, and perhaps should be highlighted/pinned since it answers a lot of the “when does this stop working?” concern.

I’m curious how well the Agent holds up as the app moves away from the original AI-generated setup. Things like schema refactors, renamed fields, or more layered conditionals. Does the Agent continue to reason well within the app or does its utility diminish in such situations?

A good question, @code-escapee, and thanks for the feedback on highlighting the clarification.

As with any LLM, the more constrained and specific the context, the stronger the performance. We see the best performance on small to medium-sized apps, even if you have multiple fields renamed or many data types added or removed.

The nice thing about building in public is that we get feedback quickly from our users on what can be improved for the largest apps and we get to immediately slot it into our roadmap.


Is it possible for the agent to introduce errors into an app that aren’t covered by the issue checker? Has it ever broken an app? What happens if it generates malformed syntax for Bubble’s JSON language?

Renamed fields are probably fine; Bubble will need to use the same approach as the Bubble API for using field names instead of IDs. Bubble doesn’t change the ID when we change a field’s name, and the network response we see from a search shows the ID based on the original name.

When a field has been renamed and the user gives a prompt like “change field value to input value”, I imagine the AI agent right now likely creates a new field, because I imagine Bubble has it looking only at the ID and not at the current field name.
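
To illustrate what I mean, here’s a made-up sketch (not Bubble’s actual data model) of a field whose internal ID stays pinned to its original name after a rename:

```typescript
// Hypothetical example only -- not Bubble's actual data model.
// A field created as "First name" and later renamed to "Funny first name":
const fieldSketch = {
  id: "first_name_text",            // internal ID, still derived from the original name
  displayName: "Funny first name",  // what the user now sees in the editor
  type: "text",
};

// If the agent resolves prompts against `id` only, a prompt that refers to
// "Funny first name" might not match this field and could end up creating
// a duplicate instead.
```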

@rutvij.bhise does Bubble look at the field name or the ID?

I interpret that as two questions:

  1. Does the Bubble issue checker ever miss issues? Yes, even issues not introduced by AI.
  2. Will the Bubble AI agent behave like every other LLM-based AI system? Yes. I believe Bubble uses an LLM provider, so it is prone to the same pitfalls. The newest research suggests code written by AI introduces 45% more bugs and security issues and takes developers 19% more time. This is not a Bubble issue; it’s just an issue with LLMs.

The 45% number keeps getting thrown around, but it’s pretty misleading even in normal coding discussions.

That stat comes from studies where LLMs are asked to spit out raw code, often by non-architects, with no real constraints, no review, and no surrounding system design. In a lot of cases it’s measuring security issues, not general bugs, and it’s analyzing raw output under the assumption that the output would just be pasted straight into a codebase. That’s not how real-life development works.

Once you add basic guardrails and validation, the number is nowhere near that high. Citing “45% more bugs” as a given fact is kind of disingenuous.

Finally, for Bubble specifically, those studies aren’t even relevant. The AI agent isn’t writing free-form code. It’s writing workflows from a fixed set of actions inside a constrained system.

You can be skeptical of AI (and I certainly am of how Bubble is implementing it) without throwing around a scary-sounding number that doesn’t actually relate to the AI system being discussed.


It’s writing JSON that needs very specific IDs for lots of integrated parts. A workflow event needs an ID associated with a page ID or a reusable element. Each action needs the event’s ID, plus the IDs of the data type and each field.

If the LLM decides to stray from all the documentation and guardrails, which they do, and one of those IDs is off, or the dot notation of a data field type is off, or the JSON gets injected into the wrong part of the overall app JSON file, that would result in bugs or issues.
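
To make that concrete, here’s a rough sketch of the kind of ID-heavy structure I’m describing. It’s purely illustrative; the names and shape are invented, not Bubble’s real app JSON:

```typescript
// Invented shape for illustration only -- not Bubble's real app JSON schema.
// The point: every piece references another piece by ID, so a single wrong
// or misplaced ID breaks the chain.
type WorkflowSketch = {
  event: {
    id: string;          // e.g. "evt_abc123", unique within the page
    pageId: string;      // the page or reusable element the event lives on
    trigger: "button_clicked" | "page_loaded" | "input_changed";
    elementId: string;   // the element that fires the event
  };
  actions: Array<{
    id: string;
    eventId: string;     // must match event.id above
    type: "create_thing" | "make_changes" | "navigate" | "log_out";
    dataTypeId?: string;                  // e.g. the internal ID of the "Book" type
    fieldValues?: Record<string, string>; // keyed by field ID, not display name
  }>;
};
```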

All AI systems are the same: they’re based on one LLM or a combination of a handful of them, all of which have seen enough data over the past three years for us to have a distilled set of known limitations. Bubble didn’t create an AI; they used an LLM and put documentation and guardrails in place around it.

I didn’t spend time analyzing the details. I’ve just seen it referenced in so many videos by experienced programmers who had their own experiences with LLMs and use the studies to show that their experience is apparently the average experience.

I use LLMs for coding, and no matter how much detailed documentation, scripting, or guardrails I put into my prompts, I still often get wrong answers and missing functionality… There’s a reason leading experts have started coming out in larger numbers saying LLMs are about as good as they’re going to get, and companies are now hiring developers in larger numbers.

I’m just saying that Bubble AI is based on an LLM, so all the known pitfalls of LLMs will be inherent in the Bubble AI Agent.

@rutvij.bhise thanks for your response. I also had a look at the manual. So in order for this to work, the user needs to set up the appropriate fields.

Can you tell us a bit more about how Bubble moves forward from here? It seems the AI works as long as you know how to set up some basic structures like fields. I see that the AI can set up the fields in a separate step, but you need to understand that you have to do that first, because the AI is not yet capable of building the login and setting up the fields from a single prompt.

I’m asking because it feels like the AI will probably empower Bubble super users, but it is not, and will not be, capable of empowering mums and dads trying to build something.

I’d love to hear a bit more about which direction Bubble is taking with AI, and on what timelines.

Great question, and I appreciate the thoughtful feedback! The agent can create and modify your data schema, including types, fields, and privacy rules.

Our strategic direction is absolutely toward making AI more accessible to everyone, not just power users. A few things we’re actively working on:

  1. Smarter multi-step planning / complex edits: So the agent can handle “build me a login flow” end-to-end with UI, workflows, and data.
  2. Better education / guidance: Helping users understand what’s happening and why, so they learn as they build.
  3. Continued capability expansion and hardening: Closing key gaps and improving the foundational feature set (editing UI, workflows, and data).
  4. Extending access to non-AI apps: We want to give everyone access, for both new and existing apps, and we’re working hard to remove this limitation.

Keep the feedback coming, it genuinely shapes our priorities!