An Update to Workload, Plus More Transparent Calculations

Choose your fighter…!

[three screenshots]

23 Likes

@mac2 I pay one of them $6 a month for a plugin I could do better myself if I had the time. I pay the other one $133 for an app. WHO WILL REIGN SUPREME!???

@mac yours is honestly the best post I’ve ever seen in the Bubble forum. And I have read so very many of them. (Check my read time.) FATALITY… MAC2 WINNNS!!!

2 Likes

I’m leaving Bubble now. Not because of the pricing fiasco but because I’ve peaked. Thank you, folks; good night & goodbye. :microphone: :arrow_down:

7 Likes

Effed up my @-reference to you in the followup. Sorry @mac2. Anyway, edited for clarity: @mac2 wins.

1 Like

thanks @keith - great post

I’m waiting for my pie to bake too, as was so nicely put by @gf_wolfer

but the picture is looking MUCH better

if it stays like this, that would be great

2 Likes

Hi all, following up with a few quick updates and some answers to a few of the questions:

  • The pie chart now shows both % and absolute WU. Thanks to the couple of people who made that suggestion!
  • You can see WU usage by your workflows in the Logs tab now
  • Re: the reports of “Deleted” pages showing up in the pie chart, as people on this thread correctly surmised, that was coming from visits (probably by crawlers or script kiddies) to pages that didn’t actually exist. We can confirm that we do not use Wordpress as part of Bubble’s primary tech stack! As of this morning, we changed the code such that we don’t count any WU from visiting missing / 404 pages. If you are seeing new data with mysterious pages, please file a bug report. Note that you will see “Deleted” in the case where: a) you had a page, b) someone visited it, and then c) you deleted it afterwards – that was what the “Deleted” indicator was supposed to be about.
  • We are still following up on a few other bug reports about WU calculations, but should have them all fully investigated and fixed well before we actually make the new plans available on the 1st. All the outstanding issues we are currently investigating are minor and should not make a meaningful difference to your app’s workload computation.

To answer some of the questions:

We plan to add the chart shared in my original post to our documentation, and to keep it up-to-date. The most likely scenario for us making a change would be us adding a brand new feature to Bubble, which would not impact the consumption of existing apps until they start using the new features. We may also choose to reduce WU weights in the future to pass along improvements we make on our end.

I’m passing this feedback along to the team: we’ve been tackling a number of similar quality-of-life improvements lately and are always hungry for more to add to the list.

Yes, one of the main goals of the pricing change is to enable us to hire engineers faster! We would like to be roughly doubling the size of the engineering team each year for the next couple years. Right now, each one of our engineering teams is stretched very thin across a wide surface area, which limits the number of improvements we can make in parallel. As we hire more people, it frees us up to move faster, including on improving our scaling and our geographic hosting.

That’s a good suggestion, I’ll pass it along to our design team.

This is a really interesting point; thanks for bringing it up. After this morning’s updates, which were driven by investigation into use cases reported to us by the community, we’ve adjusted the weights from the original regression, so I am not sure going back and trying a geometric mean would be a net improvement on what we just released. Also, while we are aware that infrastructure usage follows a power-law distribution across different applications, our analysis was narrower: we were looking specifically at the relationship between different activities and the time it takes our code to run a single execution thread containing those activities. That relationship has more to do with the implementation details of our code than general principles about computation: you can write code with any kind of relationship between its inputs and its execution time (i.e. with different big-O values). In practice, we found that a linear regression performed fairly well against the code we were analyzing. Very fair question though!
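
As a rough illustration of that kind of fit (on made-up data with hypothetical activity names, not Bubble’s actual measurements or pipeline), per-activity weights can be recovered by regressing per-thread execution time on the counts of each activity in the thread:

```python
# A minimal sketch of a linear-regression weight fit, using simulated data.
# The activity names, costs, and noise level are all invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

activities = ["db_search", "make_change", "api_call", "page_load"]
true_weights = np.array([0.8, 0.3, 1.5, 0.5])   # hypothetical time cost per activity

n_threads = 1_000
counts = rng.poisson(lam=3.0, size=(n_threads, len(activities)))      # activities per thread
exec_time = counts @ true_weights + rng.normal(scale=0.5, size=n_threads)

# Ordinary least squares: exec_time ≈ counts @ weights
weights, *_ = np.linalg.lstsq(counts, exec_time, rcond=None)
for name, w in zip(activities, weights):
    print(f"{name}: ~{w:.2f} time units per occurrence")
```

Whether a linear fit like this is adequate depends entirely on the code being measured, which is the point made above.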

Front-end actions do not consume workload (although reloading a page sometimes involves hitting the backend). For the Bubble power users out there, an easy way to tell whether something is purely frontend is to open the network tab of your browser and see if it kicks off a new request to the server; if there are no new requests, it is frontend-only.

They do count, so I do not anticipate this being a cost savings.

On our list!

Thanks for the feedback – we’ve been taking a look at our URL structures and will likely make changes to make them more efficient and usable.

Look out for an announcement related to this coming very soon!

This is a good suggestion. Our goal is actually to have better ways of manipulating data than recursive API workflows (which are unintuitive and easy to make mistakes with). We are looking into building the equivalent of “for” loops, as well as efficient bulk data transformations.

No difference. When you have a “Make change to thing” on the frontend, we simultaneously execute it on the frontend so that the browser updates immediately, while also executing it on the backend (to update the database). The frontend execution does not consume WU; the backend execution does, so it is equivalent to a backend workflow. In practice, though, running a backend workflow generally involves an additional action to schedule it, which would cost additional WU, so a frontend workflow is the way to go if there isn’t a specific reason to use a backend workflow.
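
For readers who want to picture it, here is a generic sketch of that “apply the change locally for an instant UI update, persist it on the backend” pattern. All names are hypothetical and this is not Bubble’s internals:

```python
# A generic optimistic-update sketch: local state changes immediately,
# and the database write happens in the background. Only the backend write
# would cost WU in the scenario described above.
import threading

local_state = {"thing_42": {"title": "Old title"}}

def persist_to_backend(thing_id: str, changes: dict) -> None:
    # Stand-in for the real database write (the WU-consuming part).
    print(f"POST /api/things/{thing_id} {changes}")

def make_change_to_thing(thing_id: str, changes: dict) -> None:
    local_state[thing_id].update(changes)          # instant UI update, no WU
    threading.Thread(target=persist_to_backend,    # backend write, costs WU
                     args=(thing_id, changes)).start()

make_change_to_thing("thing_42", {"title": "New title"})
print(local_state["thing_42"])   # already reflects the change
```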

Each month. I updated the original post to make it clearer.

Thanks for the suggestions! We’ve built email and in-editor alerts for when your app approaches its max WU consumption, and the ability to cap WU spend if you do not want to incur overages. I like the idea of being able to customize the alerts and to use a webhook; I’m passing the ideas along to the team. I’ll also pass along the idea of accessing the data to supplement / replace 3rd party analytics tools.

The charts still work as-is, actually. The way to think about the free Development WU on paid plans is to not count it when calculating your total WU. We are going to update the display in the editor to do that automatically; it will show you the Development WU used, but will show a total with the free Dev WU automatically subtracted. That said, check out the calculator we released – it is much easier to use than the break-even charts!

Yes, exactly. To give a concrete example, if you are on the Starter plan, and one month you use 100K Live WU, and 120K Dev WU, your total is 100K + (120K - 100K) = 120K, which is below the 175K Starter plan limit.
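
As a tiny sketch of that arithmetic, with the free Development allowance treated as an assumed parameter (100K here, to match the example; check your plan for the actual number):

```python
# Hypothetical helper mirroring the worked example above; the allowance value
# is an assumption for illustration, not an official figure.
def total_wu(live_wu: int, dev_wu: int, free_dev_wu: int) -> int:
    """Live WU plus only the Development WU above the free allowance."""
    return live_wu + max(0, dev_wu - free_dev_wu)

# Starter-plan example: 100K Live, 120K Dev, 100K free Dev allowance.
assert total_wu(100_000, 120_000, 100_000) == 120_000   # below the 175K limit
```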

This is on our design team’s radar!

Anything that does not result in a request to the Bubble server (as seen in your browser’s network tab) won’t consume WU.

We de-bounce auto-binding, meaning that we wait for the user to finish typing before updating the server.
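
For anyone curious what that looks like in code, here is a generic debounce sketch (illustrative only, not Bubble’s implementation): every keystroke restarts a short timer, and the save only fires once typing has paused for the delay window.

```python
# Minimal debouncer: call `callback` only after `delay` seconds with no new triggers.
import threading

class Debouncer:
    def __init__(self, delay: float, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None

    def trigger(self, *args, **kwargs):
        # Each new trigger cancels the pending call and restarts the wait.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.callback, args, kwargs)
        self._timer.start()

def save_to_server(value: str) -> None:
    # Stand-in for the real auto-binding request (the WU-consuming part).
    print(f"saving {value!r} to the server")

debounced_save = Debouncer(0.5, save_to_server)
for partial in ["h", "he", "hel", "hell", "hello"]:   # rapid keystrokes
    debounced_save.trigger(partial)
# Only one save fires, about 0.5 s after the last keystroke, with "hello".
```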

Only actual page refreshes that load the whole page HTML from scratch consume WU. Updating the URL on a single-page app will not consume WU.

The only API calls that consume WU are the ones explicitly created by the user, not backend calls that Bubble makes to support our geolocation and maps features. That said, I am going to double-check to make sure that we are not counting that workload.

This is on our list of things to improve to make Bubble faster and more performant. Today, though, Bubble does fetch the entire record.

70 Likes

Thanks for the late night detailed response :heart:

6 Likes

I haven’t had the time to fully read everything and I’m also waiting on seeing an average of my WUs over a longer period of time, but as of now, things are looking MUCH better. Thanks @josh and your team :pray:

1 Like

Damn. Now THAT is an @josh response.

:billed_cap:

20 Likes

Thank you @Josh. It might also be a good idea to offer existing users a free trial of the new plans before switching over. I am thinking 15-30 days perhaps. This would allow us to get a “feel” for the benefits the new plans bring us (or don’t) before switching within the 18-month grace period. The trial should not charge us for WUs, but be more customer-focused, so we can experience the performance improvements ourselves and gain confidence in the new plans before committing entirely.

1 Like

Someone is listening!

I actually want to thank all the Bubble community members who have taken the time and effort (one of them even created a calculator, for god’s sake :smiley:), and everyone who analyzed and gave suggestions over the past few days. You are the best added value for anyone thinking of using Bubble :heart_eyes:

If it weren’t for you, our concerns wouldn’t have been explained and presented as well as they were! :heart_eyes:

I think I can leave this thread now, focus again on my business, and dream more!

Again, thanks for everyone’s time and effort in communicating the recent pricing concerns on behalf of all of us :blush:

And of course thanks @josh for your detailed reply :slight_smile:

13 Likes

Thanks for this reply. It’s still too early for me to fully move on from how this has all been handled, but this is exactly the kind of response a company gives when it cares about its community, a community that before the last two days I was sure you had given up on in favor of enterprise.

Thanks for that @josh :pray:

4 Likes

Still hangin’ tough:

Also:

2 Likes
  1. Pretty good by whose measure? Has this work been validated by a regulated and certified statistician from a national statistical academy?
  2. Your entire methodology hinges completely on the completeness of the instrumentation of your code base with logging events. It is very easy to end up with “memory leaks” with this methodology and fail to account for all the branch points in the code.
  3. Assuming you actually did get the instrumentation of the study correct with 100% coverage of all branches, here is an improved statistical methodology:

The model that best fits the instrumentation of your capacity study is to treat resource consumption (GB-s) as an exposure denominator in a correlated multivariate Poisson model. In this case the counts of the Poisson process for each GB-s of computation correspond to the multinomial occurrence of logging events during code execution.

You would then need to estimate the parameters of the multivariate correlated Poisson process by maximum likelihood to get your “weights”, which are actually just Poisson rates and their correlations. Next you will need to simplify your model using likelihood ratio testing to reduce the number of unnecessary parameters in the model. Follow this up with the Fisher information estimator for the covariance matrix of the parameters, which is handy for generating standard error ellipsoids, a necessary component of the normal approximation.

To actually use the weights you will need to work out the multivariate Gamma distribution, which is the standard “exposure dual” of the multivariate correlated Poisson process. Then apply the “weights”, multiplied by the observed event log counts of each application, to the multivariate Gamma distribution. Finally, the expectation of this distribution is the estimate of compute resources used by the application, preferably with intervals that incorporate both the uncertainty intrinsic to the multivariate Gamma distribution and the uncertainty due to estimating the parameters in the previous step.

There are strong non-linearities present simply due to the way you have instrumented your study, namely logging events per resource consumed. It takes a subtle touch to fairly account for these.
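
For the curious, here is a minimal sketch of the simplest version of that setup: independent Poisson rates per logging-event type, fitted by maximum likelihood with GB-s as the exposure, on simulated data. The correlation structure, likelihood-ratio pruning, and Gamma “exposure dual” described above are all omitted, and the event names and rates are invented.

```python
# Exposure-offset Poisson fit on simulated data (statsmodels GLM).
# Event names, rates, and the exposure distribution are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

n_threads = 500
gb_seconds = rng.gamma(shape=2.0, scale=1.5, size=n_threads)   # exposure per thread
true_rates = {"search": 3.0, "make_change": 1.2, "schedule_wf": 0.4}

# Simulated event counts: Poisson with mean = rate * exposure
counts = {name: rng.poisson(rate * gb_seconds) for name, rate in true_rates.items()}

# One intercept-only Poisson GLM per event type; exp(intercept) recovers
# the events-per-GB-s rate (the "weight" in the post's terminology).
for name, y in counts.items():
    X = np.ones((n_threads, 1))
    fit = sm.GLM(y, X, family=sm.families.Poisson(), exposure=gb_seconds).fit()
    print(f"{name}: ~{np.exp(fit.params[0]):.2f} events per GB-s")
```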

22 Likes

@josh, this guy @aaronsheldon seems so smart it’s scary. It’s not too late to continue the WU calculation improvements!

9 Likes

Fantastic reply @josh - one thing that i’m curious about re: scheduled workflows:

We often think it’s good practice to move important workflows to backend workflows in our apps, especially when they handle sensitive data or when we expect this action to be shared by multiple pages in the future.

App organization and readability are good reasons for this too - we like to separate thinking about the backend from thinking about the frontend, and to keep pages as lean as possible so they are readable & anyone could pick up some of the frontend work.

So Scheduled Workflows currently consume quite a bit of WU, but if I understand it correctly, what you’re saying is that a backend workflow is essentially the same as a frontend workflow that also makes a call to your app’s API.

Since we are scheduling a lot of them at the current date/time, would it make sense to create some kind of ‘Run backend workflow’ action instead that always triggers directly & consumes less WU? Or does just the fact that it’s a defined endpoint increase the load on Bubble’s side?

I think having cron jobs / future scheduled workflows consume more WU makes total sense as you won’t be doing that for the majority of your app’s logic, but it would be nice to still be able to use the backend to organize.

7 Likes

Cheers @josh
This sounds very promising
All the best for Bubble’s growth plans!

Caution @tylerboodman. Not everyone here knows what they’re talking about. Which reminds me… NOBODY here knows what they’re talking about.

8 Likes

I wonder if, apart from the alerts about WU consumption, I will be able to set a maximum monthly WU level for my application, above which the application would, for example, serve a 404 page. Such a solution would reduce the financial risk for Bubblers. E.g. if I have 175k WU to use, I could set a max of, say, 250k, knowing that I do not want to pay more without my knowledge; after exceeding it and checking things over, I could raise it to, say, 300k to resume operation. This would reduce the risk.

1 Like