Hey all – like @jayvee.nava said above, we appreciate the thoughtful feedback. I want to share our thinking on a few topics, because I know most of the commenters in this thread are deeply invested in Bubble and we try to be transparent.
WU
Two things:
- The algorithm / bugs
- Observability and monitoring
Starting with the algorithm. We’ve seen @boston85719’s detailed investigation and reports, and I appreciate your thoroughness. For the points illustrated in the majority of these reports, we are not planning to make changes to the WU algorithm in the near future. To explain why, I need to share some context.
For Bubble old-timers, you probably remember that WU replaced Capacity, which was the metric we used for rate-limiting applications prior to WU-based pricing. Capacity was calculated by measuring your apps’ actual CPU consumption on our shared main cluster servers, as well as round-trip times to our databases.
Capacity was dependent on the internals of Bubble’s server software, which are continuously changing as we improve the platform. It was also affected by things like how warm our caches were or how much load our servers were under at any given moment. That meant there was no way for you to predict how much capacity a given operation would take, and no guarantees that it would be consistent from run to run.
Given that, we built tooling for managing Capacity inspired by engineering performance profiling tools, which typically take a top-down approach: You look at the overall consumption of a system and drill down into particularly expensive operations and hotspots, which you can then try to optimize. That’s usually more effective than trying to write each line of code in an “optimal” way. Most engineering teams are trained to write simple code (since simple code is easier to maintain and speed up if necessary) and only worry about the performance of code that becomes a clear performance bottleneck.
When we decided to move to usage-based pricing, we built WU, which is more stable than Capacity because it replaces measurements of clock time with measures of the amount of work actually performed, like the number of bytes processed by a given operation. This results in less variance as we make code changes, but the resulting metric is still tied to Bubble internals. We did publish the list of low-level operations and the weights we assign to them, but how many of those low-level operations we do per user-visible activity (like running a workflow action) still depends on our code implementation. That means there will be variance from run to run, depending on the internal state of our servers as we process each request.
Our WU implementation is fairly simple under the hood. The complexity is in the rest of Bubble’s code and how often it performs the various operations that WU measures. When we initially assigned dollar values to WU, we did it based on empirically observed WU consumption of apps in the wild at various levels of usage and app maturity. So we don’t view it as a bug or pricing issue if our users can’t reverse-engineer the cost of a given operation based on the details we’ve shared about the WU algorithm, or if the cost of a given operation varies slightly.
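To make the general shape of this concrete: a usage metric like the one described above is essentially a weighted sum over counts of low-level operations. The sketch below is purely illustrative — the operation names and weights are invented, not Bubble's published WU weights — but it shows why the same user-visible action can cost slightly different amounts from run to run when internal state (like cache warmth) changes the underlying operation counts.

```python
# Hypothetical sketch of a usage metric computed as a weighted sum of
# low-level operation counts. Names and weights are invented for
# illustration; they are NOT Bubble's actual WU weights.
WEIGHTS = {
    "workflow_run": 0.6,
    "db_bytes_returned": 0.000003,  # per byte returned from the database
    "search": 0.3,
}

def workload_units(op_counts):
    """Total units for a request, given counts of each low-level operation."""
    return sum(WEIGHTS[op] * count for op, count in op_counts.items())

# The same user-visible action can map to different operation counts
# depending on internal implementation details (e.g. cache state),
# which is why the measured cost varies slightly between runs.
cold_cache = {"workflow_run": 1, "db_bytes_returned": 120_000, "search": 2}
warm_cache = {"workflow_run": 1, "db_bytes_returned": 80_000, "search": 1}

print(workload_units(cold_cache))
print(workload_units(warm_cache))
```

The point of the sketch is that the weights can be public while per-action costs remain hard to predict: the mapping from "one workflow run" to operation counts lives in the platform's internals.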
From a tooling perspective, we took the same general approach to WU as Capacity: We think it’s most useful for you to be able to zero in on WU hotspots so you can implement high-ROI optimizations (rather than focusing on the WU consumption of things that don’t meaningfully contribute to the overall bill). For people trying to understand how to best manage WU costs, we highly recommend this BubbleCon talk by @petter.
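The top-down workflow described above boils down to: aggregate consumption per user-visible activity, rank, and optimize the largest contributors. A minimal sketch, using invented sample data (the activity names and unit values are hypothetical, not from any real app):

```python
# Hypothetical sketch of top-down hotspot analysis: aggregate usage per
# activity, then focus optimization effort on the biggest contributors.
from collections import Counter

# Invented sample data: (activity_name, units_consumed) per logged event.
events = [
    ("search_products", 12.0),
    ("load_homepage", 1.5),
    ("search_products", 11.0),
    ("nightly_cleanup", 0.2),
    ("load_homepage", 1.4),
]

totals = Counter()
for name, units in events:
    totals[name] += units

# The top entries are the high-ROI optimization targets; the long tail
# usually isn't worth hand-tuning.
for name, total in totals.most_common():
    print(f"{name}: {total:.1f}")
```

Here the ranking makes it obvious that optimizing `search_products` matters far more than shaving anything off `nightly_cleanup` — the same logic behind drilling into WU hotspots rather than micro-optimizing every workflow.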
We still aim to avoid extreme variance in how much an operation costs, and we want to avoid really inefficient implementations (which tend to be both WU-expensive and time-expensive, as well as costly to us on the infrastructure side). That's why we have made certain changes to our implementations in response to user feedback about WU. However, we want to avoid frequent modifications, so we only prioritize responding to extreme examples.
Second, observability and monitoring. As pointed out upthread, we were exploring ways to make visualizing and drilling into WU usage easier. That work is paused for now. We still see a lot of room to improve on our tooling, and I don’t see what we have today as the vision state. That said, we have limited engineering time, and we decided to prioritize the improvements in the building experience that I describe in my above update over improvements to our monitoring tools.
Not to raise hope, because this is still very exploratory, but we’ve been looking at the possibility of building better observability tooling as a Bubble app, both for WU and Server Logs, based on work the Flusk team brought with them to Bubble. This would allow us to unleash the full power of Bubble to iterate lightning-fast on our tooling, while minimizing the impact to our other engineering priorities. We’re still proving out the technical feasibility of this approach; the amount of observability data that Bubble generates is truly massive, so I don’t know if it will be feasible in the near term, and can’t commit to anything right now.
Expression Composer
We strongly agree that there is a lot of room to improve on our expression composer, especially with long, complex expressions. While we don’t have short-term plans to change it — it’s a very technically complex, high-risk surface area for us to touch — we think we can make it easier to work on complex expressions in Bubble through better modularity and expression re-use, which should make it possible to break complex expressions down into simpler, reusable parts. While we don’t have anything designed yet, the team is actively discussing what this might look like.
Our research into better modularity is in no small part because of all the feedback we’ve gotten over the years. It’s also coming out of our AI efforts: Our AI engineers have identified a number of improvements we can make to the core Bubble language that would make seamless collaboration between humans and AI easier, and facilitate AI generating higher-quality output.
I know there’s concern that our AI work is a distraction from the fundamentals of making the Bubble language and editor great to work with. We see it very differently — we are building toward what we believe is the future of visual development: developers rapidly switching between AI commands and manual editing. This means adding AI functionality to the editor, but will also mean improving the editor’s usability and creating a fluid interface. That’s why a lot of the editor UX work we’ve been doing is with the goal of integrating AI in mind.
I also want to say that we are still devoting a lot of engineering resources to making our underlying technical platform great, which improves performance, reliability, and the speed at which we can ship new features. I don’t talk about it as much in these updates, because most of the recent work has been under the hood and doesn’t manifest in things I can point to. That said, we expect to release some major improvements, especially on the database side, by the end of March. We very much believe in the fundamentals of Bubble — we just think AI is part of those fundamentals, because AI-assisted development is quickly becoming the way that all software gets built.
Miscellaneous
That’s actually pretty close to what Fede looks like in real life.
Hmm, it’s actually full-stack development! We’ve fixed the JD.
Thanks for all your great feedback – we appreciate it!
–Josh