Over the last few months, I ran a comprehensive security scan across 11,026 Bubble apps.
The results surprised me - and they might surprise you too.
What I found across 10k+ apps:
12.9% of apps were exposing user records via the public Data API (4.7M+ records in total).
30% of apps leaked at least one API key or secret (from Sendinblue, OpenAI, Anthropic, etc.).
4,805 apps had public endpoints accepting requests without authentication.
79% of apps had privacy rule gaps in their database → that’s 861M+ records exposed.
86% of apps had their version-test environment wide open.
6,518 apps had client-side exposed keys (Google Maps, Stripe, etc.).
2,779 apps had their entire Data API open without auth (302M+ records exposed).
Most Used Plugins:
1. Toolbox
2. Rich Text Editor
3. Air Copy to clipboard
Most Used Services via API Connector:
OpenAI
Quickbooks
Zerobounce
Apps with the most pages:
One with 739 pages
One with 428 pages
One with 362 pages
If there are any other stats you want to know, let me know.
—————————————————————————————————————–

Why does this matter?
Attackers don’t need to “hack” your app. They just look for open doors:
Unauthenticated endpoints → account takeover or data leaks
Public APIs → scraping user PII in bulk
Exposed API keys → attackers pivot into your connected services
If you’re storing customer data, handling payments, or connecting third-party services, these risks are very real.
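If you want a quick self-check, a minimal sketch like the one below will tell you whether your own Data API answers without credentials. The app name ("yourapp") and data type ("user") are placeholders, and it assumes your app still responds on its bubbleapps.io subdomain; swap in your custom domain otherwise:

```ts
// Quick self-check: does the Data API answer without credentials?
// "yourapp" and "user" are placeholders for your app name and data type.
const url = "https://yourapp.bubbleapps.io/api/1.1/obj/user";

const res = await fetch(url); // deliberately no Authorization header
if (res.ok) {
  const body = await res.json();
  console.warn(`Data API is publicly readable: ${body.response?.results?.length ?? 0} records in the first page`);
} else {
  console.log(`Unauthenticated request refused (HTTP ${res.status}) - good`);
}
```

If that request comes back with records, anyone on the internet can page through them the same way.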
One time I was browsing the forum and came across a thread where a developer was asking for help with some integration issues. To my surprise, he had shared a completely open link to the editor of a production app — exposing client data, API keys, and everything else.
The crazy part: this was an app from a company that had just pitched on Shark Tank Brazil with their digital solution for veterinarians. I honestly couldn’t believe it.
I managed to get in touch directly with the company and alerted the owner. Within a few minutes, the post disappeared from the forum.
Well, sometimes the issue isn’t in your app, but in the service you’re integrating with. Many of them provide a webhook subscription but don’t offer any way to authenticate the webhook call. Because of that, you often end up having to keep an endpoint open, just waiting for those incoming requests.
What I started doing in my larger applications is redirecting those webhooks to my VPS. From there, I can securely make a POST request into Bubble. This way I’m able to “lock down” access when dealing with more robust applications.
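A minimal sketch of that relay, assuming a small Node/TypeScript server on the VPS and a Bubble backend workflow protected with a bearer token (names like BUBBLE_WORKFLOW_URL and SHARED_TOKEN are placeholders, not the poster's actual setup):

```ts
import express from "express";

// Hypothetical config: the protected Bubble backend workflow URL and a
// shared secret that the workflow checks before accepting the request.
const BUBBLE_WORKFLOW_URL = process.env.BUBBLE_WORKFLOW_URL!;
const SHARED_TOKEN = process.env.SHARED_TOKEN!;

const app = express();
app.use(express.json());

// The third-party service points its webhook here instead of at Bubble.
app.post("/webhooks/:source", async (req, res) => {
  // Any validation the upstream service supports (signatures, IP
  // allowlists, replay checks) happens here, before Bubble sees anything.
  const forwarded = await fetch(BUBBLE_WORKFLOW_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${SHARED_TOKEN}`, // Bubble endpoint stays locked down
    },
    body: JSON.stringify({ source: req.params.source, payload: req.body }),
  });
  res.status(forwarded.ok ? 200 : 502).end();
});

app.listen(3000, () => console.log("Webhook relay listening on :3000"));
```

The point of the relay is that the only endpoint left open to the world is one you fully control, while the Bubble workflow itself requires authentication.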
I think this should be configurable by the user right when creating a new app. Bubble should set login as the default — and then, if the user wants, they could unlock the development version later… not the other way around.
But most of these services are used for pushing data in rather than fetching it. To fetch data from the Bubble DB, you can use the admin token with the Bubble API.
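For reference, a fetch with the admin token looks roughly like this (the app name, data type, and token value are placeholders; the real token comes from Settings → API in the editor):

```ts
// Placeholders: your app name, a data type, and the admin API token
// generated under Settings -> API in the Bubble editor.
const res = await fetch("https://yourapp.bubbleapps.io/api/1.1/obj/invoice", {
  headers: { Authorization: "Bearer YOUR_ADMIN_API_TOKEN" },
});
const { response } = await res.json();
console.log(`${response.count} records returned, ${response.remaining} remaining`);
```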
Yeah, or at least show them when they deploy the app.
This may not be an issue. Most “webhook” endpoints are open (think about Make, Zapier…). The issue is what happens after and what the endpoint is used for.
@Jici Yes, most of the time it isn’t - but sometimes there are unauthenticated GET endpoints that are publicly accessible, so anyone can see the data.
For example, if you set up the Stripe webhook URL and aren’t verifying the incoming subscription ID before updating the DB, then anyone can call that endpoint and upgrade themselves to a paid user.
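One common guard is Stripe's own signature verification. A minimal sketch using the stripe-node library might look like this (the secrets and the upgradeUser helper are hypothetical placeholders, not anything from the original post):

```ts
import Stripe from "stripe";
import express from "express";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const endpointSecret = process.env.STRIPE_WEBHOOK_SECRET!; // "whsec_..." from the Stripe dashboard

const app = express();

// Signature verification needs the raw body, not parsed JSON.
app.post("/stripe/webhook", express.raw({ type: "application/json" }), (req, res) => {
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers["stripe-signature"] as string,
      endpointSecret,
    );
  } catch {
    return res.status(400).send("Invalid signature"); // forged or replayed call
  }

  if (event.type === "customer.subscription.updated") {
    const sub = event.data.object as Stripe.Subscription;
    // Only now is it safe to mark the matching user as paid, after
    // confirming the subscription ID really belongs to one of your users.
    // upgradeUser(sub.customer as string, sub.id); // hypothetical helper
  }
  res.sendStatus(200);
});

app.listen(3000);
```

Without the constructEvent check, the endpoint accepts any payload someone cares to POST at it.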
Yes yes… I was referring exactly to those endpoints that are used to receive webhook events, which, by default, don’t have any authentication mechanism.
“11,000 apps audited” sounds dramatic — until you remember most Bubble users have a pile of abandoned test builds and sandbox projects from years ago. Did your audit filter those out, or were they part of the headline?
How many of those apps were actually live or on paid plans? Scanning inactive learning projects isn’t a security revelation — it’s analytics theatre.
If this was really about helping the community, you’d be sharing best practices, not fear-bait stats. Maybe next time, audit your methodology first.
All are paid apps - not test URLs. That’s not fear-bait (not sure why you think that). I’ll share the methodology if more people ask for it; if not, I won’t.
Appreciate the clarification — though I’ll be honest, your answer opens up more questions than it resolves.
You mentioned all 11,000+ apps were paid and that you’d share your methodology if asked. I’m asking. How exactly were these apps identified in the first place? Bubble doesn’t publicly list apps (do they?), and many of the serious ones run on custom domains, which makes them indistinguishable from any other hosted site. Did you identify them through certificate logs, DNS scraping, or something else?
If you only analyzed apps still using the bubbleapps.io domain, that’s a very different dataset — one that likely excludes the majority of production-grade builds. And if you somehow included custom-domain apps, that implies an unusual level of visibility into Bubble’s infrastructure or user data that would raise its own privacy concerns.
The point isn’t to nitpick; it’s about transparency. If this was truly a community-minded audit, publish the method so others can verify it. Without that, the claim of “11,026 paid apps audited for security” feels less like research and more like marketing copy.