New in the AI Agent: Image upload, smarter expressions, clearer responses

Hi everyone, my name is Cory, and I’m a product manager on the Bubble AI team.

I’m excited to share a few updates we just shipped to the Bubble AI Agent (beta) that are all working toward the same goal: closing the gap between what you envisioned and what ends up in your app.

Image upload

You can now drop an image directly into the Agent while you’re building and it’ll use that as a reference.

This came from something we kept hearing from builders. A lot of you already have a pretty clear picture of what you want your app to look like. Maybe you’ve been screenshotting apps you love, designing in Figma, or sketching out rough mockups. But translating that into a text description is its own kind of work, and something always gets lost. Now you can just show the Agent what you’re going for.

This feature is available in the Agent now — so once you've started an app, you can drop in a screenshot of an app you admire, a Figma design, a mockup, or any other visual reference, and the Agent will build from it. It's particularly useful when you want to add a new page or feature and have a clear visual in mind.

We’re especially curious to see how builders use this with Figma design exports. If that’s your workflow, give it a try and let us know how it goes.

Smarter expression generation

Expressions are the logic that powers your app: they make elements dynamic, conditions trigger correctly, and data show up where it should. This update is focused on making expression generation a lot more reliable, whether you’re generating a new app from scratch or asking the Agent to update an existing one.

Here’s what we improved in this update:

  • A wider range of expressions now generate correctly. A long list of operators and data sources — things like Current Language, Current Page Name, geographic position, text operators, date manipulation, file and image operators, and list operators like min/max — weren’t previously supported. That’s now fixed.

  • The Agent understands your app better. It now has visibility into your element types and their states, which makes it noticeably more reliable for conditional logic, element states, and multi-page apps.

  • Conditionals, privacy rules, and component generation are more reliable, too. Conditionals for workflows, privacy rules for data types, and expressions created during component generation all got improvements in this update.

  • App generation also benefits. The underlying reliability improvements apply to initial app generation too, so you should see fewer broken expressions overall when generating from scratch.

The goal with this update is to get the Agent producing the right output the first time, with less back-and-forth to get there.

Clearer responses from the Agent

The Agent now communicates more clearly about what it built and gives you better guidance on what to do next, with responses that are easier to scan and less text-heavy.

Expanding Agent access

Most Bubble Ambassadors (BAMs) and agencies now have access to the Agent. We’re actively working on expanding access to more users and we’ll share more details as that rollout continues.

What’s next

A few things we’re actively working on:

  • Compound editing: the ability to edit UI, workflows, and data all in one request, so you don’t have to break complex changes into separate prompts

  • Agent editing for mobile apps: we’ve started scoping this out and will share more details as we continue to make progress

We’re continuing to invest in making the Agent smarter and more capable, with more updates to come.

Try out image upload and let us know what you build — especially if you have an interesting use case with Figma designs or app inspiration screenshots. And as always, we welcome your feedback in this thread.

– Cory and the Bubble AI team


Glad to see things progressing in the right direction. AI is moving so fast, I’m sure it’s hard to keep up. I hope you guys can get it to a place where it will be the go-to place to build apps. :blush:


I’d like to see the agent accept the export from Google Stitch and just get it done. Basically, take the HTML and turn it into Bubble — which would also cover Figma and other sources, since I believe they all provide HTML export features.


Good feedback! We don’t have a Google Stitch integration, but you could take a screenshot from it and upload that to the Bubble Agent.

How pixel-perfect does it get?

Are parentheses included in this? This is one of the most important “enablers” for AI-generated expressions.

Also, I really hope the agent can get to a point where I can say “make this page responsive” and it will make everything perfect for mobile. Will be massive.

Finally, this needs to get launched soon because the plugin AI agent is still not even in beta, and that’s the single most important feature for Bubble to survive the AI age. Once that comes out, you essentially get the best of both worlds: clean reliable deterministic architecture + isolated snippets that don’t break your entire app if they stop working.


Why don’t already-created apps have access to Bubble AI? What are your thoughts on Claude recently removing partners from their applications, like GitHub, Antigravity, and others? Do you expect the current model to change when your contract ends?

Not perfect, but it makes building UI significantly faster — and we’re focused on improving it further! It’s a great starting point that gets you most of the way there; then you can handle small tweaks and polish with our visual editor.

Niceeeee. Nice work guys. Excited to try it out

Two great questions.
On existing apps: Bubble apps vary enormously in size and complexity, and we want to make sure the agent performs well on all of them. We’re expanding coverage in stages as we validate quality across different app profiles and iterate on feedback to extend its capabilities (for example, right now we’re working on enabling the agent to edit mobile apps, not just web apps).

On model dependency: Our AI layer is model-agnostic by design; our architecture is parameterized to work with other models if and when that changes. We’re not locked into any single provider, and we continuously evaluate alternatives. Hope this helps!
