Monthly Community Update -- February 2025

Oh, the infamous elephant in the room—but don’t worry, Bubble seems to think if they just ignore it long enough, we’ll all magically accept it. Meanwhile, new users keep getting sticker shock after building their MVPs. Maybe next month’s update will include ‘New Pricing Transparency’… but I won’t hold my breath.

3 Likes

Spotted on LinkedIn:


Interesting to see AI integration qualified to ‘enable AI-assisted front-end development’

2 Likes

I’m deleting this post based on a conversation I had earlier today.

Let’s all hope things get worked out and Bubble succeeds.

2 Likes

Using AI to help code plugins has been game-changing for me. Each time I hit the limit of what Bubble can do, a plugin comes to the rescue. My journey has now led me to building my own plugins, and I'm comfortable that while no-code lowers barriers, the plugin system picks up the slack whenever I need something Bubble can't do.

Keep this focus, make it even easier to code plugins with AI code generation, and you've got a very powerful ecosystem.

(Note: ChatGPT understands Bubble well, but it makes annoying mistakes. If this were tighter, plugins would fly out faster and be better, drawing more coders in to build free/private/premium plugins for the community.)

3 Likes

When prompting for code to build Bubble plugins, just give it some context on the Bubble function parameters, how they link together, and how to work with lists. I have those instructions stored in memory, and it's been very reliable.
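For anyone wondering what that context buys you, here's a minimal sketch of the kind of server-side action body I ask the AI to produce. It assumes Bubble's standard server-side action signature, and the field names (user_list, name_text) are just placeholders for fields you'd define yourself in the plugin editor:

```javascript
// Sketch only: a server-side plugin action body of the kind I prompt for.
// Assumes the standard signature async function(properties, context);
// "user_list" and "name_text" are hypothetical fields defined in the plugin editor.
async function(properties, context) {
    // Bubble lists are not plain arrays: ask for the count, then fetch the items
    const count = properties.user_list.length();
    const users = properties.user_list.get(0, count);

    // Each item is a Bubble "thing": read a field with .get("field_name")
    const names = users.map(u => u.get("name_text"));

    // Keys here must match the return values declared for the action in the editor
    return { names_joined: names.join(", ") };
}
```

Once the AI knows that lists work through length()/get() rather than plain arrays, and that the returned keys must match the declared return values, the output needs far fewer corrections.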

1 Like

Oh, reducing WU consumption? Now that’s a good one! :joy: Given the long-standing WU overcharge bugs that still haven’t been fixed—or even properly addressed on the forum—it’s pretty clear where Bubble’s real priorities are. And let’s be honest, the more WUs we burn, the fatter Bubble’s wallet gets. Why fix a system that’s printing money, right? :money_with_wings::rofl:

Mobile Beta, does it really exist? I feel so let down being "invited" virtually to the New York event and then put on a waitlist that goes nowhere, for a feature announced in 2023. I have not been here as long as many of you, but I started my project with expectations of the native app development tool. I've had to give up waiting for this feature, but updates like this, which show the team is unaware of the many ways the community feels let down, are very disheartening.

Why should a conversation make you take back your comment? Did Bubble correct you and show that your concern was inaccurate? Please clarify, as no conversation should make someone's polite but critical prior comment "delete worthy". Your opinion was shared with the community; what changed it?

Hey all – like @jayvee.nava said above, we appreciate the thoughtful feedback. I want to share our thinking on a few topics, because I know most of the commenters in this thread are deeply invested in Bubble and we try to be transparent.

WU

Two things:

  1. The algorithm / bugs

  2. Observability and monitoring

Starting with the algorithm. We’ve seen @boston85719’s detailed investigation and reports, and I appreciate your thoroughness. For the points illustrated in the majority of these reports, we are not planning to make changes to the WU algorithm in the near future. To explain why, I need to share some context.

For Bubble old-timers, you probably remember that WU replaced Capacity, which was the metric we used for rate-limiting applications prior to WU-based pricing. Capacity was calculated by measuring your apps’ actual CPU consumption on our shared main cluster servers, as well as round-trip times to our databases.

Capacity was dependent on the internals of Bubble’s server software, which are continuously changing as we improve the platform. It was also affected by things like how warm our caches were or how much load our servers were under at any given moment. That meant there was no way for you to predict how much capacity a given operation would take, and no guarantees that it would be consistent from run to run.

Given that, we built tooling for managing Capacity inspired by engineering performance profiling tools, which typically take a top-down approach: You look at the overall consumption of a system and drill down into particularly expensive operations and hotspots, which you can then try to optimize. That’s usually more effective than trying to write each line of code in an “optimal” way. Most engineering teams are trained to write simple code (since simple code is easier to maintain and speed up if necessary) and only worry about the performance of code that becomes a clear performance bottleneck.

When we decided to move to usage-based pricing, we built WU, which is more stable than Capacity because it replaces measurements of clock time with measures of the amount of work actually performed, like the number of bytes processed by a given operation. This results in less variance as we make code changes, but the resulting metric is still tied to Bubble internals. We did publish the list of low-level operations and the weights we assign to them, but how many of those low-level operations we do per user-visible activity (like running a workflow action) still depends on our code implementation. That means there will be variance from run to run, depending on the internal state of our servers as we process each request.
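To illustrate the shape of that (with completely made-up operation names, weights, and counts, not our real numbers), WU is essentially a weighted sum of low-level operation counts: the weights are fixed, but the counts per user-visible activity depend on our implementation and server state.

```javascript
// Illustration only: hypothetical operation names, weights, and counts.
// WU for a request = sum over low-level operations of (count × weight).
const WEIGHTS = { db_read: 0.015, bytes_processed: 0.000003, workflow_step: 0.1 };

function workloadUnits(opCounts) {
  return Object.entries(opCounts)
    .reduce((total, [op, count]) => total + count * (WEIGHTS[op] || 0), 0);
}

// The same user-visible action can generate different counts from run to run
// (cache state, internal implementation details), so the WU total varies even
// though the weights themselves never change.
console.log(workloadUnits({ db_read: 2, bytes_processed: 18000, workflow_step: 1 })); // ≈ 0.184
console.log(workloadUnits({ db_read: 3, bytes_processed: 21000, workflow_step: 1 })); // ≈ 0.208
```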

Our WU implementation is fairly simple under the hood. The complexity is in the rest of Bubble’s code and how often it performs the various operations that WU measures. When we initially assigned dollar values to WU, we did it based on empirically observed WU consumption of apps in the wild at various levels of usage and app maturity. So we don’t view it as a bug or pricing issue if our users can’t reverse-engineer the cost of a given operation based on the details we’ve shared about the WU algorithm, or if the cost of a given operation varies slightly.

From a tooling perspective, we took the same general approach to WU as Capacity: We think it’s most useful for you to be able to zero in on WU hotspots so you can implement high-ROI optimizations (rather than focusing on the WU consumption of things that don’t meaningfully contribute to the overall bill). For people trying to understand how to best manage WU costs, we highly recommend this BubbleCon talk by @petter.

We still aim to avoid extreme variance in how much an operation costs, and we want to avoid really inefficient implementations (which tend to be both WU-expensive and time-expensive, as well as costly to us on the infrastructure side). That’s why we have made certain changes to our implementations in response to user feedback about WU. However, we want to avoid frequent modifications, and only prioritize responding to extreme examples.

Second, observability and monitoring. As pointed out upthread, we were exploring ways to make visualizing and drilling into WU usage easier. That work is paused for now. We still see a lot of room to improve on our tooling, and I don’t see what we have today as the vision state. That said, we have limited engineering time, and we decided to prioritize the improvements in the building experience that I describe in my above update over improvements to our monitoring tools.

Not to raise hope, because this is still very exploratory, but we've been looking at the possibility of building better observability tooling as a Bubble app, both for WU and Server Logs, based on work the Flusk team brought with them to Bubble. This would allow us to unleash the full power of Bubble to iterate lightning-fast on our tooling, while minimizing the impact on our other engineering priorities. We're still proving out the technical feasibility of this approach; the amount of observability data that Bubble generates is truly massive, so I don't know if it will be feasible in the near term, and can't commit to anything right now.

Expression Composer

We strongly agree that there is a lot of room to improve on our expression composer, especially with long, complex expressions. While we don’t have short-term plans to change it — it’s a very technically complex, high-risk surface area for us to touch — we think we can make it easier to work on complex expressions in Bubble through better modularity and expression re-use, which should make it possible to break complex expressions down into simpler, reusable parts. While we don’t have anything designed yet, the team is actively discussing what this might look like.

Our research into better modularity is in no small part because of all the feedback we’ve gotten over the years. It’s also coming out of our AI efforts: Our AI engineers have identified a number of improvements we can make to the core Bubble language that would make seamless collaboration between humans and AI easier, and facilitate AI generating higher-quality output.

I know there’s concern that our AI work is a distraction from the fundamentals of making the Bubble language and editor great to work with. We see it very differently — we are building toward what we believe is the future of visual development: developers rapidly switching between AI commands and manual editing. This means adding AI functionality to the editor, but will also mean improving the editor’s usability and creating a fluid interface. That’s why a lot of the editor UX work we’ve been doing is with the goal of integrating AI in mind.

I also want to say that we are still devoting a lot of engineering resourcing to making our underlying technical platform great, which improves performance, reliability, and the speed at which we can ship new features. I don’t talk about it as much in these updates, because most of the recent work has been under the hood and doesn’t manifest in things I can point to. That said, we expect to release some major improvements, especially on the database side, by the end of March. We very much believe in the fundamentals of Bubble — we just think AI is part of those fundamentals, because AI-assisted development is quickly becoming the way that all software gets built.

Miscellaneous

That’s actually pretty close to what Fede looks like in real life.

Hmm, it’s actually full-stack development! We’ve fixed the JD.

Thanks for all your great feedback – we appreciate it!

–Josh

24 Likes

Big! Looking forward to seeing this on existing apps.

All I can say is WOW.

Bubble has become a disappointment factory.

I genuinely believe I have wasted my trust on Bubble for too long, and I have been wasting my money since the introduction of WU, which you have just said is not a problem in your eyes. I wish you and Bubble the best @josh

Funny to read this:

And this:

In the same reply. Where is the transparency if we cannot know exactly what an operation should cost? What is explained in the WU documentation should be what we pay for. Period.

But for WU bugs, there's more than just the calculation! What about 'is 0' or '> 0' costing more than '>= 1'? What about being charged to apply privacy rules? There are a lot more things that I forget…

"Slightly"? As reported by @boston85719… 0.62 vs 2.12 is not slight!

However, even if I'm not really happy about the time spent on AI stuff… I can understand where Bubble wants to go with that, and it makes sense to me. I just hope that the AI stuff and the Mobile stuff can be released ASAP so Bubble can start moving on to other (from my point of view) more important improvements (DB speed, search, expression composer, more core functions…)

5 Likes

People won't be happy about this… but I think Josh is focusing on the right things. To make Bubble bigger, it needs to utilize AI and have a good native mobile builder. It needs that more than WU monitoring.

Maybe I can help users with a new tool to reduce WU more…

4 Likes

I agree. And not only is AI changing the way software is built; it’s also changing the way humans interact with software and digital devices. For many use cases, there will likely be a lot less click / tap, swipe, poke, query, filter nonsense. And that could in turn fundamentally impact how a Bubble app is developed. It might become mostly a process of iterative refinement - perhaps by speech, or even thought.

Now give me my Star Trek replicator and holodeck, dammit!

:smirk:

1 Like

You could call it WU Reducer :smirk:

I also agree that the "bigger picture" they're focused on is the right direction of travel, but I also believe those paying for the platform should be charged fairly and in accordance with, well, how we're being told we'll be charged, not a cost that varies arbitrarily from one call to the next.

Even if it doesn't make a material difference in most cases, isn't it reasonable to expect that the cost of random bugs, in pound notes (no matter how minor), shouldn't just be passed on to customers?

I find it staggering that they consider this acceptable practice, and I'm feeling pretty let down as a customer.

Cases such as those @boston85719 has been methodically documenting should be addressed, regardless of how trivial you believe them to be; otherwise you are undermining transparency in your pricing system and damaging relationships with your customers.

3 Likes

They're not really bugs though (or at least not on the level one might assume). At a foundational level on Bubble's server, they're charged correctly. It's just that Bubble is so abstracted that it can be hard, as a user, to work out how your expressions are 'translated' into database and workflow operations.

1 Like

I get what you're saying, but one example was 3 identical WFs that incurred different consumption. On the surface that feels like a bug, regardless of the internal complexity for Bubble in determining how expressions are translated into operations for their charging model.

It doesn't feel quite right to me, irrespective of the fact that it probably doesn't make a material difference.

2 Likes

Is this to say that the strain placed on the server by other apps will affect how our app is charged for performing actions? Similar to a city road congestion fee, where the more cars are on the road, the more it costs to drive on it?

You should when the differences are in the range of 211%… I mean, I provide range estimates, but I don't say "between 100 and 300"; that would be insane of me. And most importantly, you should fix the bug in the WU metric charts so that users can figure out the costs based on the details you share in those charts… obviously the debugging and hypothesizing I've done to uncover and understand the bug in the WU metric charts is not really possible for beginner/intermediate users.

So, two things here then. One: ensure we can zero in on WU hotspots for more than just a 48-hour period, so please add to those tools the ability to see hour-based WU metric charts for at least the same period of time as the app's log retention window.

Two: understand that a WU hotspot that most definitely does contribute to the overall bill in a meaningful way is an action to create a new thing. If I am seeing that one out of 3 actions that all do the same thing (simply create a new record) is 211% more costly than the other two, that is a significant contribution to my overall bill. I'm not nit-picking, and I'm not trying for any "gotcha" moment or anything. I'm simply testing my WU optimizations while building for clients, seeing bugs, and reporting them as I would any other bug affecting an app, that is all.

A 211% higher cost for essentially the same action is not an extreme example? That example should prompt an investigation by Bubble to uncover why the WU metric charts are reporting such wildly different numbers… BTW, I've done it for you all, and it is in the video linked at the end.

That is great, but please don't allow it to take support effort away from investigating bugs reported by users in areas of the platform that may not be Bubble's top priority, so that reported bugs can still be addressed and resolved.

The issue is not that users are not equipped intellectually to work out how the expressions are 'translated'; the issue is the bug in the way WU consumption is reported in the WU metric charts.

When we run a 'create a new thing' action with a condition of 'current user is logged in', or one without any condition, and that action is the only action in the series, the WU consumption is 1.12.

Now, when you add more than one action to the series (in my original tests I had 3; in the test in the video below I added more, so I could use my logical approach to debugging and reverse engineering to understand more clearly where the bug is), we see that ALL 'create a new thing' actions have the same charge of 0.62 WU, with or without a conditional applied, except for the last in the series, which additionally gets the aggregate count of all 'create a new thing' actions in the series multiplied by 0.5 WU. Breaking this down more simply, ALL 'create a new thing' actions are in fact charged the same 1.12 WU, so there is no bug in the system for charging WUs. BUT the bug is in the WU metric charts and how the WU consumed by an action is, or is not, attributed in the chart breakdown.
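To make the pattern concrete, here is the arithmetic from my tests above as a quick sketch (3 'create a new thing' actions in one series, using the numbers I just described):

```javascript
// Numbers taken from my tests above: what the WU metric chart reports per
// "create a new thing" action in a series, vs. what each action actually costs.
const PER_ACTION = 0.62;  // every create action is shown at 0.62 WU...
const PER_CREATE = 0.5;   // ...and the last one also absorbs 0.5 WU per create in the series
const creates = 3;

const reported = Array.from({ length: creates }, (_, i) =>
  i === creates - 1 ? PER_ACTION + creates * PER_CREATE : PER_ACTION
);
console.log(reported);  // ≈ [0.62, 0.62, 2.12]

const total = reported.reduce((a, b) => a + b, 0);
console.log(total.toFixed(2), (total / creates).toFixed(2));  // "3.36" "1.12" — same per-create cost as a standalone action
```

So the billed total is consistent with every create really costing 1.12 WU; the chart just pins the whole series-level 1.5 WU onto the last action, which is exactly what makes it look like a 2.12 WU outlier.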

What this bug causes is an inability to truly "look at the overall consumption of a system and drill down into particularly expensive operations and hotspots, which you can then try to optimize."

If the WU metric chart bug shows me that the cost of one action is 2.12 but all the others are 0.62, I'm going to drill down into that one action that costs me 211% more than the others… But of course, that is not what I should be doing, since the 2.12 WU applied to that action is not accurate, due to the bug in the WU metric charts.

So much of my time would not have been wasted on this if the bug in the WU metric charts didn't exist. I would say the time of others would not have been wasted reading my rants on this, nor would @josh's time have been wasted replying to the topic, nor support's time fielding the reports… Now, what we do need is engineering time well spent on fixing the way the WU metric charts attribute WU consumption to particular actions.

6 Likes

I’ve bought into this in just the last week or so. I was starting to think dark thoughts about Bubble losing relevance. But Bubble Assist and @NoCodePete changed my mind.

AI tooling in the editor is not a distraction - but vital to keep Bubble relevant.

5 Likes

I understand your point!

But as someone who also works with coded solutions like Vercel and Cloudflare, I can say this also happens on a day-to-day basis. Even executing the EXACT SAME code leads to different usage (in CPU time).

That's why I think they only give averages / total usage per executed function, instead of the per-action insights Bubble is giving.

3 Likes