I have a big problem with the AI agent, and it has nothing to do with AI.
The feature is built on top of a shoddy foundation: the current undo/redo feature. It is unreliable and cannot be trusted. Even if I believed that the AI agent only made the changes it’s supposed to, I would never use it because undo might not undo the changes reliably. There is a “changelog” feature that is only accessible on higher tier plans (really bizarre), and I’m not sure if that’s even reliable enough to trust. [If you’ve used this new “changelog” feature, let me know if it fixes the inherent issues being discussed here.]
Is there work being done on this front to ensure that undo/redo are always 100% reliable? Because as is they can result in extremely buggy behavior and lost work, and an AI agent making changes on top of this would only magnify the preexisting issues.
Are you simply asking whether, after the AI agent makes a change, the undo/redo buttons will function?
I can tell you the quickest answer: test it. If it doesn’t work, then complain in the forum with a title that properly reflects the complaint, like ‘undo/redo buttons don’t function after AI changes’, and before that, be sure to submit a bug report to properly attempt change for the good of the community. @georgecollier doesn’t like complainers in the forum.
I felt I had a pretty good grasp of what the post was about: an underhanded way, via a rhetorical question in the title, to draw attention to a criticism you have of the undo/redo functionality, and perhaps a larger gripe about heavy investment in AI features at the expense of core product functions, as evidenced by the tagging of the co-founders.
This dissatisfaction with the lack of core product enhancements is felt by many and was addressed by Emmanuel in the Dec 4, 2025 AMA.
But if I’m off base with my understanding of the post, I can only assume it is just that you’ve encountered a bug, or perhaps a perceived limitation, in the undo/redo button and used the dramatic effect of a rhetorical question to highlight your concerns, as I originally replied to.
This all sounds very hypothetical. Have you had a chance to actually give the AI agent a go recently? It’s being worked on every week, so improvements are noticeable even month over month. If you run into issues with it, please share, and I’ll let the team know.
The undo/redo button plain does not work when it comes to transitions and styles. There are known bugs that occur if you use it several times in a row. It’s known to be unreliable. Has nothing to do with my perception.
I genuinely don’t think he was making a good-faith post. The non sequitur callout of George and the tone of his comments made that obvious (e.g., calling my post “odd” and “underhanded”).
I’m waiting for it to be released for non-AI apps first, and I don’t really have the bandwidth to create a new AI app just to play around with it. But I’ve seen videos of it being used. My main concern is not so much with the accuracy of changes being made (I know this is being worked on) but rather the observability and the fundamental undo/redo architecture underlying the agent (unless the undo/redo system for the agent is using completely new architecture unrelated to the current one).
I have heard many horror stories of AI agents deleting parts of codebases, databases, etc. and the scary part to me is that those same things could happen in Bubble but without me even knowing about it due to the lack of observability and robust changelog.
The goal of this topic is to hopefully get ahead of these issues before a bunch of angry forum posts/X threads are made in a few months after the agent goes into production.
Yes, I did say that, but not in my first post, which you replied to by stating that if I don’t understand a post I shouldn’t reply. You have to understand, though, that some people are genuinely here to help, so even if I’m confused, my posting questions for clarity should still be welcomed.
I’ve never experienced it, because after I add a style I just remove it; for me that’s simpler and more intuitive than attempting an undo, since my property editor is usually still open right after the style or transition is implemented. I also typically test these changes immediately, as testing while building is second nature for me. I suggest you consider that approach.
Please share, because I am unaware of any and have not experienced them. A Bubble support reference to a ‘known limitation’ is fundamentally different from a ‘known bug’: what I’ve usually seen from Bubble is that actual bugs get fixed, but a known limitation may never be expanded upon.
Again, I personally do not know it to be such, and I’ve been using it daily for 8 years. I do know it’s limited to the last 50 actions, which can be confusing for some users; it was for me when I found out that each section of a dynamic expression counts as its own action.
Any issues you spot, do as I would do: record a video and submit a bug report. It helps not only you, but other developers and Bubble.
AI changes should be 100 percent transparent and observable. After each change, I should be able to see the before-and-after in UX/UI terms. Cursor does this well: I can always see what changes were made and reject or approve them.
For now, there is no way I can trust Bubble’s AI Agent without being able to set rules/constraints, as it interferes with my app’s JSON.
But I believe Bubble will get there soon enough. It’s just a matter of time.
Yeah, when I was a beginner bubbler I didn’t understand bugs reported by experts either.
You have to record that and demonstrate it as reproducible to get a bug fix.
Well, in fact, the bug report form, especially the video upload, came about through my efforts working with Bubble support to get better support, after exasperation around what I saw as inadequate support around 2020. Both I and the support agents discussed how to improve their ability to understand bug reports and the behavior being experienced, and we landed on a need for video, or at least photo, explanations to help them.
I’ve had probably a hundred or more bugs fixed in the 7.5 years I’ve been building on Bubble. Of course, not everything I perceive as a bug is considered one by Bubble; a recent example is properties on reusable elements of an option set type not having the filter constraint of ‘this option’s…’ available in filters, as it is everywhere else. Bubble views it as a more technically challenging improvement rather than a bug, but that doesn’t stop me from trying to improve the product for the whole community by properly reporting bugs I find, or shortcomings I perceive as bugs, especially as I build more complex features.
I get where you are coming from: repeatedly clicking the undo button breaks it, and it doesn’t work on option sets. And there’s that gaslit, unsure feeling around stuff like this.
What caused this is simply that some buttons and actions are not reliable. I don’t have this problem myself, but the “optimize application” button doesn’t work for almost everybody, and there are some checkboxes that don’t do anything, like “this app is native”. So, overall, there are significant traces of unreliability in the editor.
I am a power user, but I still click the undo button with a delay, turn off the issue checker when pushing to live (even when there are no errors), and still save to history before running “optimize application”, just in case.
It is like iPhone vs Android: you trust Apple not to break things, whereas on Android you expect breakage but get very experimental new stuff and growth. Bubble is now neither.
How to fix that? Well, of course, a very robust product is the only solution: removing unnecessary stuff, making buttons actually work reliably, and improving the overall native experience.
Don’t get me wrong, but when I click an element it pushes the page to the top without any padding, so when I click an element I hit my head on the ceiling. That alone makes me question things.