We just added a new analytics tool under the Logs section in the editor, where you can view a detailed breakdown of your application’s demand in terms of capacity. This new tool is particularly useful when trying to understand capacity usage across all aspects of your application.
You can now view breakdowns of Workflows, Page visits, Searches, App Version and Overall usage. Clicking on a section in the chart itself will redirect you to the area of the application for easy editing or provide deeper insight about its usage.
Using the date input field, you can view capacity usage starting from a given date and time (for instance, a spike in the capacity usage chart above), at intervals of either 10 minutes or 1 hour.
Thank you for this! It was always quite a mystery as to what was causing the spikes in capacity. Are there options for increasing the view beyond one hour? I see an option for “view usage analytics since…”, but can’t get it to work. All I can view is for the last hour. Helpful, but not as helpful as viewing over a longer period of time.
Is this just referring to loading certain pages of the app (for example, pages with a video background)? Do you have any suggestions on what we can look to improve when these are the two reasons for reaching capacity? @kevin2
What you’re seeing is the ID of the workflow/API endpoint. It may take up to a few seconds to load the actual name. In any case, you can click on that section of the chart, and you should be redirected to that workflow/API endpoint in the editor.
In terms of optimization to avoid capacity issues, does anyone know of resources that speak to best practices for optimization with Bubble?
I hope this is on topic for this thread, as it’s a capacity issue – I have a user signup flow of 6-7 screens (it’s a mobile app, that’s why so many screens), and 9 times out of 10 it maxes the capacity and an error gets thrown.
I upgraded my app to the Professional plan, hoping this would solve it. I’m only one user testing the signup process, and the Pro plan comes with “2 units of reserved server capacity”, yet the issue persists.
I am just not getting it. I have an application which runs a workflow to create a thing and 20 things as children of the first thing. I have my 2 units, spent $20, and added another unit. I am the only user, as I’m still testing, and it maxes out. It just doesn’t make sense. I’m sure it’s me or the way I’ve built stuff.
I was on the legacy plan, which I understood I could remain on, and which, as I understood it, was based on workflows. Nevertheless, I was getting notifications that my application was maxing out, so I moved over.
I’ve read all the threads on this topic, yet it’s still just a bit of a mystery to me. I’m not complaining, just making a statement. Bubble had responded on another thread on this topic, which I felt I understood… obviously not.
Hi Andrew, thanks for the comment… When you mention “a big deal”, is the creation of 20 things onerous in terms of resource consumption? For the past few months of learning, much of what I’ve read in other posts on the forum is centred on workflows and either the creation or manipulation of lists of things. It seems to be the bread and butter of Bubble.
It is indeed an api workflow. It’s a production app otherwise I’d share a link of course.
Based on a user action I’m creating a thing, and then based on that thing an API is being called to create child things. I’ve run it up to 20, although it could be more if this ever gets off the ground. I’m nervous of running it on more for fear of causing a global blackout.
There is nothing left to optimise. It’s the bare bones.
I probably used the wrong words to describe my thought by choosing “big deal.” I’d say, just based on your quick post, that the creation of those 20 things is probably where you’re hitting the limits. Does the new chart give you any insight into where the bottleneck is occurring?
Blue = 6%, which was uploading a single file
Green = 25%, an API workflow to create a list of 30 things
Red = the remainder, which was an API workflow to populate a repeating group
So if this maxes out my application on a seemingly mundane action for me as an individual user, I’m trying to extrapolate to a scenario where I have, for example, a handful of users using my application, to determine exactly how much extra capacity, in terms of cost, I need to plan for.
Just a random thought, as the logic of your application is unknown: do you really need to create 30 things each and every time this action is called?
Maybe there is another simpler way from the pure logical perspective?
Like, for example, creating 30 items in your database as templates, and then using Copy a List of things, then Change a List of things each and every time the action is triggered?
So unless each one of the 30 objects should have a unique value, maybe Copy/Change is the way?
Or maybe a combination of both - copy the list of 30 things, then run an API WF to update all of them in the unique manner? Should still be faster than creating each one of them every time.
Or maybe even not to create all 30 at the same time if that’s an option?
The thing with scheduling an API WF to run multiple (30!) times, based on my observations, is that even if one step by itself is fast, the whole queue is processed by the system sequentially, with a small buffer between each step, hence it takes more time to be processed.
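To illustrate that last point (outside Bubble entirely – the timings here are made-up numbers, not measured Bubble behaviour), here’s a rough Python sketch of why 30 sequentially queued steps with a small buffer between them cost more wall-clock time than one batch pass over the same items:

```python
# Rough illustration (plain Python, not Bubble) of why a queue that
# processes 30 scheduled steps one at a time, with a small buffer
# between steps, takes longer than a single batch pass.
PER_ITEM_WORK = 0.05   # assumed seconds of real work per item
QUEUE_BUFFER = 0.2     # assumed scheduler gap between queued steps
ITEMS = 30

# Sequential scheduling: every item pays the work AND the buffer.
sequential = ITEMS * (PER_ITEM_WORK + QUEUE_BUFFER)

# Batch pass: the items pay only the work, no per-step buffer.
batch = ITEMS * PER_ITEM_WORK

print(f"sequential: {sequential:.1f}s, batch: {batch:.1f}s")
# → sequential: 7.5s, batch: 1.5s
```

With these assumed numbers the buffer, not the work, dominates – which matches the observation that each step being fast doesn’t make the whole queue fast.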
Thank you for your interest and consideration. Unfortunately it’s one of those private things, which makes it annoyingly secretive, so it’s probably unfair of me to ask for help without painting a vivid picture.
Maybe I could use an analogy
Let’s suppose we have a Bubble application. This application would host content from various third-party sites… these sites could be like VideoHive, just as an example. Now suppose a member of VideoHive has a portfolio of, let’s say, After Effects templates. When this member uses this Bubble application, he or she creates an account. This is the master thing. Now suppose VideoHive had an API, and that API allowed us to take his or her details and make a GET call to collect all of this member’s After Effects templates. Each template would have a different name, price, and other relevant details, but the basic format is still the same: it’s a VideoHive template. And let’s suppose he or she had 30 templates; I would create each template thing and associate it with the master thing. In essence, this is what I’m doing.
This member could have 10 templates or could have 100, and it’s not dissimilar to my application. I used an example of creating 30 things, but it could be more or less.
I had presumed this was the optimal way of running an API workflow to create a master thing and child things. Each child thing has a number of fields, so I cannot use a list field in the master thing, although I can easily write a list back to the master thing – but that’s possibly just adding an unnecessary step.
If I’m correctly understanding your guidance would a better approach be to create a number of child things and then update these?
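If it helps to see the shape of the pattern outside Bubble, here’s a plain-Python sketch of the master/child structure from the analogy – the API response is faked as a local list, and all names and fields are illustrative, not from any real VideoHive API:

```python
# Plain-Python sketch of the master/child pattern: one master record
# (the member), many child records (templates) referencing it.
# The "API response" below is a hard-coded stand-in for one GET call.
from dataclasses import dataclass, field

@dataclass
class Member:                 # the "master thing"
    username: str
    template_names: list = field(default_factory=list)

@dataclass
class Template:               # a "child thing" referencing its master
    name: str
    price: float
    owner: Member

# Pretend this list came back from a single GET call to the third party.
api_response = [
    {"name": "Slideshow", "price": 24.0},
    {"name": "Lower Thirds", "price": 18.0},
]

member = Member(username="demo_user")
templates = [Template(owner=member, **t) for t in api_response]

# Write the list back to the master once, at the end – not once per child.
member.template_names = [t.name for t in templates]

print(len(templates), member.template_names)
```

The point of the sketch is the last step: each child carries a reference to the master, and the master is updated with the full list in one write at the end.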
I did not realize there is an external API involved here.
This brings a lot more possible questions and points to improve, but essentially here’s what can be done:
Probably simply changing creating to copying & updating will not affect the performance by much, since you are going to be creating the same number of objects.
What I would suggest checking is whether you can limit the number of calls to the external API.
The low-hanging fruit here would be checking whether you are making a new call to the external API each time you create a child object.
Try to pull the full list in one shot, store it locally, then process it with an API WF cycle, for example.
An invisible Repeating Group can serve as a local container for that API data.
Then, once you’ve pulled all 30 things into the repeating group, you can use the RG as the source for the Schedule API Workflow on a list action, and see if it saves you some time.
If that helps, I believe it is of less importance whether you create the things with a reference to the master thing from the start, or update them after they are created – but probably the optimal way would still be this: create everything, then update the master object once with a list, not 30 times.
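To make the “one shot” idea concrete outside Bubble, here’s a small Python sketch counting round-trips under each approach – the fetch functions are hypothetical stand-ins for the real third-party API, not anything Bubble-specific:

```python
# Sketch of "pull the full list in one shot": count how many calls
# to the (stand-in) external API each approach makes for 30 items.
CALLS = {"count": 0}

def fetch_one(item_id):
    """Hypothetical per-item endpoint: one round-trip per item."""
    CALLS["count"] += 1
    return {"id": item_id}

def fetch_all(item_ids):
    """Hypothetical list endpoint: one round-trip for everything."""
    CALLS["count"] += 1
    return [{"id": i} for i in item_ids]

ids = range(30)

# Naive: one external call per child object → 30 round-trips.
CALLS["count"] = 0
items = [fetch_one(i) for i in ids]
print("per-item calls:", CALLS["count"])   # 30

# Better: one external call, then process the list locally → 1 round-trip.
CALLS["count"] = 0
items = fetch_all(ids)
print("batched calls:", CALLS["count"])    # 1
```

The processing work is the same either way; what the batched version removes is 29 network round-trips, which is usually the expensive part.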