Why your backend workflows might be your app's biggest vulnerability

I promise it’s true.

By just executing msearch directly, with whatever constraints you like…

It’s hard to overstate just how important this is.

  • a user, without exception, can access ANY data that their privacy rules permit access to
  • even on a test app with no elements/pages/workflows, you can still execute a search directly with whatever constraints you want

I’d suggest you try the data explorer @ https://secure.notquiteunicorns.xyz with one of your apps to see what different user roles can see. If you’ve been assuming that constraints are protecting your data, you’ll get an awkward surprise :sweat_smile: It’ll do exactly what you’re asking for, which is show you how that data is public.
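If you’d rather check without the explorer, the same idea works with a plain request against Bubble’s documented Data API (not the internal msearch). A minimal sketch — app domain and data type names below are made up, and it only works if the Data API is enabled for that type; whatever comes back is exactly what privacy rules expose to an anonymous visitor:

```python
# Hedged sketch: query a Bubble app's Data API with NO auth header to see
# which things/fields privacy rules expose to a logged-out user.
# "my-test-app.bubbleapps.io" and "secretthing" are made-up placeholders.
import json
import urllib.request

def data_api_url(app_domain: str, data_type: str, limit: int = 10) -> str:
    """Build the public Data API endpoint for a given data type."""
    return f"https://{app_domain}/api/1.1/obj/{data_type}?limit={limit}"

def fetch_public_records(app_domain: str, data_type: str) -> list[dict]:
    """Fetch records anonymously: the response is whatever privacy rules
    allow any visitor on the internet to read."""
    with urllib.request.urlopen(data_api_url(app_domain, data_type)) as resp:
        return json.load(resp)["response"]["results"]

# Example (needs the Data API enabled on the app):
# print(fetch_public_records("my-test-app.bubbleapps.io", "secretthing"))
```

If the secret field shows up in that response, constraints were never protecting it.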

How does a user execute an msearch directly with whatever constraints they want?

From developer tools… Create a test app, don’t send me the editor, have a publicly visible data type with a secret word in the DB and I’ll find it and show you, even without ever touching/displaying that data on a page/workflow.

Your suggestion that constraints protect data is just incorrect, so I’m happy to show you otherwise. It’s a critical issue in most Bubble apps and a common misconception.

The manual says this too:

Privacy rules are an essential part of your app’s security. Any database data that is private or sensitive needs to be protected with Privacy Rules to be considered secure.

2 Likes

Did you get a chance to try that out @boston85719 ?

@ZubairLK I didn’t mention - you just need to append the header Authorization: Bearer <token from backend workflow> to any network request to Bubble
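A minimal sketch of what that looks like, assuming Python’s stdlib and a placeholder domain/token — the point is only that the bearer token rides along on any request you build:

```python
# Hedged sketch: attach a backend workflow's bearer token to a request.
# Domain, data type, and token are placeholders, not real values.
import json
import urllib.request

def authed_request(url: str, token: str) -> urllib.request.Request:
    """Append Authorization: Bearer <token> to any Bubble network request."""
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

req = authed_request(
    "https://my-app.bubbleapps.io/api/1.1/obj/secretthing",
    "REDACTED_BACKEND_WORKFLOW_TOKEN",
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"]["results"])
```

The request then runs with whatever access that token grants, which is why leaking such a token (e.g. in logs or URLs) is serious.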

1 Like

@georgecollier curious for the actual answer too. Assume it’s calling the API directly because the endpoint is still public / the data isn’t secured with privacy rules.

OT but on trend (caveat: I’m still learning how to use APIs) and I haven’t been able to find a solid answer to this yet: there are cases where you can’t enforce privacy rules with Bubble logic [EX: one-to-many relationships + w/o using lists]. Is it safe to secure a data type with privacy rules, then use an internal API call [with secure authorisations + ignoring privacy rules] to get the data?

The request payload in msearch looks to be either encoded or encrypted, presumably you have a method of decoding the payload and understanding how msearch is called?

Yeah, I learned it from @NoCodePete who is a very smart guy and also building https://nocodefusion.com/ which you should check out

3 Likes

Another one that I see sometimes is for endpoints/webhooks, usually double-checking an external payment like Stripe.

I found instances where data was being sent to that workflow while ignoring privacy rules, exposed as public and lacking any auth.

To correct this:

  1. Start by adding a secret parameter to the webhook URL you give Stripe, then check it in the backend workflow (e.g. https://app-name.com/api/1.1/wf/webhook-name?key=secret_key).
  2. If WU is not a problem, as @dorilama said, you can use the webhook as a trigger, and only then make an API call to Stripe to validate it, basically making a “triple check”.
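Step 1 boils down to a constant-time comparison of the query parameter inside the backend workflow. A rough sketch — the function and constant names are mine, not anything Bubble generates:

```python
# Hedged sketch of step 1: reject the webhook unless ?key= matches the
# secret stored server-side. hmac.compare_digest avoids timing leaks.
import hmac
from urllib.parse import parse_qs, urlparse

EXPECTED_KEY = "secret_key"  # stored server-side only, never in client code

def webhook_key_is_valid(request_url: str) -> bool:
    """Constant-time check of the ?key= query parameter on the webhook URL."""
    params = parse_qs(urlparse(request_url).query)
    supplied = params.get("key", [""])[0]
    return hmac.compare_digest(supplied, EXPECTED_KEY)

# webhook_key_is_valid(".../wf/webhook-name?key=secret_key")  -> accept
# webhook_key_is_valid(".../wf/webhook-name?key=wrong_guess") -> reject
```

Note the caveat raised later in the thread: a secret in the URL ends up in Stripe’s logs, so treat it as a speed bump, not real auth.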
1 Like

Thanks George :slightly_smiling_face:

2 Likes

This will leak your access token in Stripe logs.

You don’t have access to the raw request data in Bubble, so the verification fails unless you massage the data to make it work (which defeats the purpose of the procedure).

This has been discussed already multiple times in the forum with different people drawing different conclusions.

3 Likes

I just realized that maybe Bubble’s bug bounty program is acquisitions.

5 Likes

Thanks for the heads up. I’ll look up those discussions. It seems the last solution I sent really isn’t possible in Bubble.

The first one is still good enough for me though. Not bulletproof, but better than nothing. I’ll def look up the topic because I don’t trust that solution by itself, and I do need the webhook on a project :sob:

You can always treat the webhook as a trigger and make an API request to Stripe to get the data securely. Add an IP check as the condition of the webhook to verify it’s originating from known Stripe servers.
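A sketch of that trigger-only pattern, with illustrative names throughout — the IP set below is a made-up subset (Stripe publishes its real webhook IP list in its docs), and retrieving an event by ID is a documented Stripe API call:

```python
# Hedged sketch: treat the webhook as a signal only. Gate it by source IP,
# then re-fetch the event server-to-server so the payload can't be spoofed.
import json
import urllib.request

STRIPE_WEBHOOK_IPS = {"3.18.12.63", "3.130.192.231"}  # illustrative subset

def from_known_stripe_ip(remote_ip: str, allowlist: set[str]) -> bool:
    """Condition for the backend workflow: only fire for allowlisted IPs."""
    return remote_ip in allowlist

def fetch_event(event_id: str, secret_key: str) -> dict:
    """Re-fetch the event from Stripe's API; trust this response,
    not the webhook body that arrived over the wire."""
    req = urllib.request.Request(
        f"https://api.stripe.com/v1/events/{event_id}",
        headers={"Authorization": f"Bearer {secret_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The webhook body is then never trusted directly: it only tells you *which* event to go fetch.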

4 Likes

That’s a great idea, @dorilama! I’ll update my original comment there.

I just realized I’m doing something very similar hahah I’m using the endpoint to add a record to the database, and if the user has it, the signup flow uses that record ID to make an API call to Stripe. I made another redundancy without realizing it :stuck_out_tongue:

1 Like

Extremely hot take, but you also don’t need to use webhooks for Stripe. Especially true if you’re not in a domain with a high dispute rate. “But what if the scheduled workflow fails” - well, webhooks fail occasionally and don’t come in at the right time either. The API is very robust, and you can handle everything on a scheduled basis and correct any rare discrepancies/errors with a simple reconciliation SWAL.

1 Like

No, I trust you mate. I had no idea it was possible to run the msearch directly in the console. I’m trying to figure out how to do this, but the x, y and z are tough nuts to crack.

And yet the code that generates that is available to read in your browser :upside_down_face:
It seems like a classic example of obfuscation: annoying enough to make it not worth it for the majority of bad actors, fun enough to read out of curiosity :slight_smile:

1 Like

Not to mention accessed from anywhere. There are no “origin” restrictions on the API calls.

– Good work @georgecollier. It has a nice, solid look & feel to it, not to mention it provides good incentive for users to think twice about how they’re building their data-centric applications.

2 Likes

Are we sure it’s even meant to be obfuscated? Multiple people on this forum have figured it out, I don’t think it’s particularly difficult if that’s the case. It could just be a byproduct of how they’ve set it up internally.

Yeah, if the search API calls weren’t obfuscated it’d be way worse, but none of this is new knowledge. It’s clearly documented that privacy rules are the way you restrict data access.