
The Ultimate Guide to Bubble Security is out - 300 pages of privacy and security content

I’m very happy to announce that the Ultimate Guide to Bubble Security is now out!

Buy it here

Seven months of work, a lot of research, and several rounds of feedback from power users and from the Bubble team have resulted in more than 300 pages of illustrated content that approaches security and privacy from three sides:

Bubble’s security framework
Your Bubble application is the end result of a mind-blowing chain of security measures that you mostly know nothing about: the physical security of the servers against hackers, physical intrusion and natural disasters, iron-clad undersea cables, database encryption, password hashing and salting, user management and server-side actions. Your app is covered by the same security setup and protocols that protect huge companies like Adobe, Netflix and AirBnB and organizations like Harvard Medical School, the European Space Agency and, recently, the NSA. And there’s nothing you need to do to maintain that - it’s simply set up for you, monitored 24/7 to maintain your uptime and security every day and night of the year. This section explores how your application data is protected from both a hardware and software perspective - so that you know what you are investing in and can speak to clients about security with confidence.

How to think about security and privacy
Security and privacy are all about decisions. Thousands of them. We’ll explore how to approach the policy that guides these decisions as you build, maintain and update your app. Security is not just a result of technical proficiency, but of sound judgement and respect for your Users. Many of the biggest data leaks of the past decade have happened not as a result of technical glitches or weak security, but because a decision to keep the data private had never been made in the first place, and the data was simply there for the taking. A security and privacy policy is not just a dry legal document - it’s a promise to your Users and a strategy for building a brand that radiates trust and predictability.

Building secure applications
Finally, we’ll dive deep into the technical side of security. Bubble offers strong security, but doesn’t enforce it - you’re free to expose most data as you please. What this book will attempt to do is to fill the knowledge gap on the things you didn’t know that you didn’t know - so that every decision you make from there on regarding security is a conscious choice and not an oversight. We’ll look into how to secure your account, how to think about on-page security, what data Bubble reveals in its source code, securing API data and workflows, securely redirecting Users, securing data with Privacy Rules and many other details that together make up the totality of your app’s security.

I’ve put a lot of hours into making this an up-to-date and correct guide, but security is a wide and complex topic: I welcome all feedback if you think something’s unclear, missing or incorrect.

I sincerely hope you’ll enjoy the read and look forward to discussions :slight_smile:


Noice!!! Thanks for sharing @petter


Been waiting for this for months and it couldn’t have come at a better time!
Your previous book built my bubble performance knowledge foundations and I’m sure that this one will do the same.
Thanks for your work @petter


Thank you for putting this together – looking forward to reading this!


Purchased. Thank you!


Great work! :clap: :clap: :clap:

Thanks Petter, just bought it!


Got it already! Great job.


Thank you guys, I hope you enjoy the book!


Great info and brilliantly put together @petter!
Really appreciate all of your efforts in breaking down this complex info into something accessible.
Keep up the great work!

Thank you @petter for another spectacular publication! Between this, your Performance book, and your other tutorials, I now have a firm grasp of DevTools.

Some follow up questions for you:

  1. You mention all info saved in all option sets is downloaded on all pages (144) – does this include notes/comments we have typed on the option sets? Same question for comments on Data Types and Fields.

  2. You describe that Privacy Rules generally do not affect workflows (134-135) because users can still edit data without being able to view it, and you give an example of a server-side workflow (Make changes to a thing). However, isn’t it true that for client-side workflows (e.g. Set state) a user cannot edit data without having access to view it – because an applied Privacy Rule would prevent Bubble from sending the data to the client’s device? In other words, if I want to hide data from a user, but want that data to be used in a workflow triggered by the user (e.g. as part of a Condition), then I need to make sure the workflow happens entirely on the server, or make the workflow a backend workflow. Right?

Thank you!


Thanks again for the book Petter!
You saved my app from having laughably huge vulnerabilities. And you are completely right in saying that we as early-adopter developers have the responsibility to keep Bubble’s reputation high.

I have been struggling to authenticate Incoming API calls from Stripe (webhooks) and I’ve had to set the API workflow to “Can run without authentication”. I imagine that there must be a way to do this because many Bubble apps use Stripe. Do you have any tips on how I can go about securing the webhook?

Amazing book as usual! Thanks Petter!

Bubble supports Bearer Token authentication. You will need to generate an API token on your app’s Settings page. After that, you need to ensure Stripe adds a Bearer Token Authorization field to the HTTP header:

Authorization: Bearer BubbleAPITokenHere


  1. All data generated by a Bearer Token API call will have the Created By field set to Admin.
  2. The token is not scoped to a particular Workflow or Data API. It grants access to all publicly exposed Workflow and Data APIs.
  3. As such, make sure your token is encrypted in transit – specifically, only use HTTPS for your POST requests.
  4. Additionally make sure your token is encrypted at rest on the calling server.

In this context, HTTPS plays a double role: encrypting any data being transmitted and asserting the identity of the Bubble application through trusted certificates. Likewise, the Bubble API token held by the requester asserts the identity of the requester.
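For illustration, here’s a minimal Python sketch of what such an authenticated call looks like from the calling server’s side. The endpoint URL, workflow name and token are hypothetical placeholders – substitute your own:

```python
import json
import urllib.request

# Hypothetical placeholders - use your own app URL, workflow name and API token.
ENDPOINT = "https://yourapp.bubbleapps.io/api/1.1/wf/stripe_webhook"
API_TOKEN = "BubbleAPITokenHere"

payload = json.dumps({"event": "invoice.paid"}).encode("utf-8")
request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    method="POST",
    headers={
        # The Bearer token authenticates the caller against Bubble's API.
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send it; always use HTTPS so the
# token is encrypted in transit.
```

Remember that, per point 2 above, whoever holds this token can call any publicly exposed Workflow or Data API – treat it like a password.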


Hi @greg18

Thank you for the kind words!

1. Notes/comments
No, notes/comments are contained within the editor files and do not transfer to the live app from what I’ve been able to find. If you perform a network-wide search in devtools (this short article describes how), you’ll find the comment if you inspect a Bubble editor tab, but not when you inspect a live app. This is true even if you reference the option set somewhere on the page. Note that I have only tested this for option sets, but I would expect that logic to apply everywhere.

2. Privacy Rules and Actions
This is a great question and I’m glad you brought it up, since it’s a potential vulnerability. This will be a longer reply, so buckle up. I want to shed some light on two parts of it.

The first is the importance of understanding the difference between not being able to write to a Thing because of Privacy Rules (by disabling auto-bind) and not being able to write to a Thing because a Thing cannot be found (by disabling Find this in searches).

As you say, you will not be able to save changes to a database record that Bubble can’t find (because of Privacy Rules), but it’s important then to understand that it’s not the action that is stopped on a server level: the action (i.e. Make changes to a Thing) is running just fine, but since the record can’t be found (protected by Privacy Rules) there’s technically nothing to make changes to, and so the pen is there but there’s no paper to write on, so to speak.

While this in many cases can theoretically provide the protection you need, there are potential scenarios where this can give you a false sense of security. To understand this, we’ll need to define the difference between a search query and a lookup query. Performing a search means sending a call to the server with specific constraints to return a list of zero, one or more matches (such as all users with the last name Amlie). A lookup means sending a call to the server to fetch one record based on that record’s index key (what in Bubble is called its Unique ID). In that case you’ll only ever produce one result, and always the right one (such as the User with the Unique ID 163247823748237).
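To make the distinction concrete, here’s a tiny Python model of the behaviour described above. This is purely illustrative – it is not Bubble’s actual implementation, and the field names are made up:

```python
# A toy in-memory "database" keyed by Unique ID.
users = {
    "163247823748237": {"last_name": "Amlie", "findable_in_searches": False},
    "998877665544332": {"last_name": "Smith", "findable_in_searches": True},
}

def search(field, value):
    """A search applies constraints and respects 'Find this in searches'."""
    return [u for u in users.values()
            if u[field] == value and u["findable_in_searches"]]

def lookup(unique_id):
    """A lookup fetches one record by its index key; no search rules apply."""
    return users.get(unique_id)

# The protected user is invisible to a search...
assert search("last_name", "Amlie") == []
# ...but a lookup by Unique ID returns the record anyway.
assert lookup("163247823748237") is not None
```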

The challenge for Bubble developers from a security standpoint is that it’s not always clear when one method is used over the other. Privacy Rules are 100% consistent in how they function: the Find this in searches setting will completely prevent the data from being found in a search. But when the Thing is referenced directly by use of its Unique ID, the query type changes to lookup and circumvents the Find this in searches rule. In most cases this is expected, but in others it can be confusing. I’ll illustrate this with three scenarios that I personally would rank as follows:

  1. completely expected
  2. not obvious but understandable
  3. surprising (but explainable)

All scenarios will discuss a User A that wants to make changes to a User B, and we’ll assume that the User type has the strictest Privacy Rules applied (can’t find in searches, can’t autobind and can’t view any fields):

Scenario 1: Direct database reference saved on Current User
In the first scenario, we’ll reference a Thing by use of a direct database connection: User B saved in a field on User A, such as Current User’s Boss. A Make changes to a Thing workflow using that field would predictably work regardless of Privacy Rules, since this is a lookup (we’re not searching for the Thing).

The only Privacy Rule that can stop the above action from completing successfully is one that hides the field Boss on the Current User, in which case the action will still run, but it will be trying to make changes to an empty record. No other Privacy Rule will stop this change from being made.

Ok, everyone who knows their Privacy Rules knew this one. Onto a less obvious one:

Scenario 2: URL parameter
In this example, I’m in an app that relies on URL parameters for navigation and for fetching data. Someone has emailed me (User A) a URL that points directly to an Edit user form for User B, with User B’s Unique ID passed as a parameter.

When loading the edit form, User A will not see any of User B’s information since all fields are hidden, but User B is loaded into the form – since the Unique ID is not hidden by Privacy Rules and I’m not relying on a search to find the User. Technically, this is a lookup. In other words: if I make any changes to that User with an action, those changes will be saved to User B with no problems.

Setting up a URL like the above is fairly common, and I don’t see any reason to discourage it, but I wouldn’t be surprised if even experienced devs reading this are unaware that the record is technically there and can be modified. Many experienced devs are indeed surprised to see that the Unique ID field can’t be protected by Privacy Rules at all, since there’s rarely any reason to look for it.

The reason this works is the same as in scenario 1: since Bubble has the Unique ID of User B, it’s not performing a search behind the scenes; it’s performing a lookup, which Privacy Rules don’t protect.

Scenario 3: Do a search for with Unique ID as constraint
In the third example, we’ll do the same as above. We’ll assume that someone sent User A the Unique ID of User B. But this time, we’re not passing that ID through a URL parameter. Instead, we’re performing a Do a search for and making the changes to the result of that search. Since we have not enabled Find this in searches, surely the result will be empty and the action will attempt to write on an empty record:

What you’ll find if you try this is… * gasp! * that even this action will complete successfully. You’ll find the record even if it shouldn’t be searchable, and the action completes as if no Privacy Rules had been set up.

Why is that? The reason is the same as in the earlier examples: Privacy Rules adhere to what’s happening under the hood. Whether we as the app editor think we are performing a search is irrelevant – Privacy Rules only care about how Bubble actually sends the query.

As soon as you use a Thing’s unique ID as a constraint, the query changes from search to lookup and is no longer protected by Privacy Rules - you’ll find the record with no problems and can make as many changes to it as you like.

Should it be this way? Honestly – I’m not sure. From a traditional web development perspective I’m sure this makes perfect sense, but to a Bubble developer with no knowledge of SQL queries it makes Privacy Rules appear inconsistent, since on the surface we’re obviously performing a search here.

All of these scenarios can – and of course should – be easily solved by placing an additional server-side condition on the action itself. In other words, you should never rely on a record being empty as a means of security: as long as the action is allowed to run, there’s a risk of some scenario where it’s not empty after all.
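The principle can be sketched in a few lines of Python (illustrative only – in Bubble this would be a condition like “Only when This Thing’s Creator is Current User” on the action itself):

```python
def make_changes(current_user_id, record, changes):
    """Apply changes only when an explicit server-side condition passes."""
    # Explicit condition: only the record's owner may modify it. The guard
    # runs regardless of how the record was found (search or lookup).
    if record is None or record["owner_id"] != current_user_id:
        return False  # action blocked on the server
    record.update(changes)
    return True

record = {"owner_id": "user_a", "status": "draft"}

assert make_changes("user_b", record, {"status": "published"}) is False
assert record["status"] == "draft"   # the unauthorized change never happened
assert make_changes("user_a", record, {"status": "published"}) is True
```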

I’ve flagged the scenario to Bubble as a potential vulnerability, but it’s not a given that they see it that way. Since it’s technically correct (and maybe impossible to change without breaking a lot of other things), there’s a fairly big chance that this is simply how it’s supposed to behave and they’re not going to change anything. From my perspective, the number of apps affected by this is likely quite small, but not non-existent: I am concerned that some devs are not aware of it, since it’s not really communicated anywhere.

I’ll probably address this issue in an upcoming update to the book or a separate article, but want to get some clarification from Bubble first and at the very least get a confirmation that I’ve interpreted this all correctly. Everyone reading this should be aware that until we have it confirmed, my understanding could be incorrect in a minor or major way.


So, if some JavaScript guru knows the ID of a database item and the names of your fields, they can change the values of those items via the browser console, and we can do nothing to prevent this?

If this is the case, then we need a privacy option to determine whether a user may edit a certain field.


This is the pressing question indeed. As of now I don’t know exactly how this is handled: but I’ll post an update as soon as I do.

Quick update (@mike_verbruggen):

I talked to Bubble about the possibility of using Javascript to “fake” a server-side Bubble action and exploit the fact that a lookup will produce a result regardless of privacy rules. As suspected, it’s not quite that easy.

The way I understand it based on these conversations is that Bubble doesn’t accept any server-side actions that it doesn’t recognise as part of the app itself. In other words, if an action exists on the page that makes changes to a database record (one that could be found via a lookup even when protected with Privacy Rules), there’s a chance that a hacker would be able to trigger that workflow and make those changes – but only if you as a developer left the action unprotected to begin with by not including conditions. As I touch upon in the book, conditions on server-side actions are also evaluated server-side (when possible), meaning that the action will be stopped on the server as long as the condition is there. The way I see it then, any vulnerability introduced as part of this logic is the fault of the developer, and not Bubble.

All that being said, this emphasises the importance of protecting both database (privacy rules) and workflows (server-side conditions) - not setting just one of them.


Thank you Petter,

I would argue that while this apparent lack of constraint around the Bubble unique id (kind of a UUID) seems bad, it is actually a well-thought-out bit of security engineering. What it boils down to is that on the Bubble architecture side the unique id is protected by the chicken-or-egg problem: within the Bubble architecture, discovering the existence of a unique id requires either having permission to search for a thing, or having a priori knowledge of the unique id to use in a lookup. In this manner the unique id falls within the same security envelope as API tokens: they are secrets you should never divulge, and are prohibitively expensive to guess. In this vein, my recommendation for a security practice is to only ever expose unique identifiers through slugs, which can be protected within the privacy settings.

The other way to think about it is: if an adversary has found a means of establishing the existence of one or more unique ids from Bubble, then they have broken a lot more security measures than just listing the unique id.
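As a back-of-the-envelope illustration of “prohibitively expensive to guess” (assuming, purely for illustration, an identifier containing 18 random digits – actual ID formats vary):

```python
# Rough cost of brute-forcing an 18-random-digit identifier at one
# million guesses per second (both numbers are illustrative assumptions).
search_space = 10 ** 18
guesses_per_second = 1_000_000
seconds = search_space / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years to enumerate the full space")  # roughly 31,710 years
```

Even at a billion guesses per second, and even ignoring that the server would throttle or block such traffic long before then, enumeration stays far outside practical reach.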

By far the greatest security vulnerability in Bubble is community plugins. The introduction of malicious surveillance code that intercepts data posted to the actions provided in a plugin is a non-trivial possibility. To prevent this we are really reliant on a combination of Bubble reviewing the plugins and market forces screening out hostile code.

FWIW, here are the steps I’ve taken to secure my Stripe webhook endpoints.
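For anyone looking for a starting point: Stripe’s documented scheme signs each webhook by computing an HMAC-SHA256 over `{timestamp}.{payload}` with your endpoint’s signing secret and sending it in the `Stripe-Signature` header. Here’s a minimal Python verification sketch (the secret and payload below are made up, and real headers can carry multiple `v1` entries, so treat this as a simplified model):

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Verify a Stripe-Signature header (t=...,v1=...) against the payload."""
    parts = dict(item.split("=", 1) for item in sig_header.split(","))
    timestamp, signature = parts["t"], parts["v1"]
    # Reject stale events to blunt replay attacks.
    if abs(time.time() - int(timestamp)) > tolerance:
        return False
    signed_payload = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)

# Example with a made-up secret and a signature computed the same way:
secret = "whsec_testsecret"
payload = b'{"type": "invoice.paid"}'
ts = str(int(time.time()))
good_sig = hmac.new(secret.encode(), f"{ts}.".encode() + payload,
                    hashlib.sha256).hexdigest()
assert verify_stripe_signature(payload, f"t={ts},v1={good_sig}", secret)
assert not verify_stripe_signature(payload, f"t={ts},v1={'0' * 64}", secret)
```

In Bubble this check would have to live in a backend workflow or a plugin action with access to the raw request body, since the signature is computed over the exact bytes Stripe sent.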

And here’s a visual that might be helpful.

Everything runs smoothly, and I sleep just fine. :wink: