[FREE] New AI Privacy Rules Checker by Flusk 🤯

Thank you so much @antoinechiro, really appreciate your feedback! :heart:

I think it’s normal that the ecosystem has different points of view, and that’s also what makes it so rich!

Hey @mikeloc!
Thanks a lot for putting so much effort into sharing your opinion about the current situation.

Yes, it did change. We made a mistake saying this (it was false!), and we modified the initial thread.

As you said, we know it’s controversial. And as @vnihoul77 said, we think it’s the best way to really make the ecosystem more secure, even if it doesn’t look that way to you at the moment.
We’ll see in the end.

Thanks for the kind words. Even if the form of what you’ve said isn’t the easiest to hear, the content is relevant, and internally we thought a lot about the method we used and whether we were right or wrong to go that way.

I think it does. Please continue to share your opinion, this is part of what makes this community so powerful.
I think that exchanges like those can only move us forward, and never backward.


Securing privacy in our apps is crucial.
Privacy rules are often neglected by devs and even clients (when they see the app working, that’s good enough).
This looks very promising, I will try it ASAP.

So I did a free scan on my app and found some fields were unintentionally exposed (mostly from deprecated data types). I changed the privacy rules so they are not public, but after deploying the update and doing a new scan, some of the data types are still appearing as leaked even though there should be no way that they are now. Any ideas on why this might be happening?


Hi @Benjamin_Rodgers
Sometimes the results might be cached for up to 15 minutes. Can you make sure to try again after this delay and in a fresh tab?

Otherwise, feel free to share your Privacy Rules settings here so we can help you troubleshoot them :slight_smile:

It seems that it must be cached for much longer than 15 minutes because I have since deleted some of the old data types (after scanning yesterday) instead of changing the privacy rules, and they are still appearing when I just scanned. There is also a message displayed as “Prediction cached from similar previous test.”

Well-respected!? You just made my month :blush:

Adobe, a $158B company, made the effort to release a “Do not train” option for their customers to opt out of having their creative outputs used to train Adobe AIs. If wanting Bubble to do the same is crazy, then sure - I’ve lost my mind :crazy_face:

But we’re here to talk about Flusk - I think both your initial alarm and your final conclusion are correct, Mike. I was thinking of making a Flusk-like service as far back as 2021. @petter of Amlie Solutions was thinking of doing the same even earlier… still says coming soon on his website:

Point is it was inevitable someone was going to create this. But it was just as inevitable malicious actors would systematically exploit unhardened Bubble apps to make a buck. A guy named Ryan Kulp has been going around emailing the owners of unsecured apps with a scary message:

I’ve got a few problems with this email:

  1. Like your initial reaction to @wesleywsls’s post, this email makes it seem like the only way out of one’s security issue is to pay this random stranger to fix it: either $200 per hour or by buying his $49 book.
  2. He writes out the email addresses (without redaction) of two of the latest signups to your app.
  3. He’s lying or ignorant - says he’s written the “first and only Bubble Security Guide.” To make a claim like this suggests he would have researched alternatives… in which case Ryan surely would have found @petter’s excellent book.
  4. The only letters he capitalizes are the ones in his name - like a psycho.

Having a free scan available, as Flusk later clarified, is significantly better than what this fleece job is offering. Better still, Flusk redacts sensitive information from the scan (so they say; I haven’t tried their solution).

I think Flusk, in general, will be a net positive for the community because - again - our apps are already being scanned by shadier actors.

My only question is, how does Flusk ensure that I actually own the app I’ve claimed is mine (to prevent it from being scanned by the public)? I’m assuming you require we create an option set or add something to our HTML that proves app ownership?


Who said I was talking about you, @zelus_pudding? :wink:

Yeah, sorry about the “lost his mind” thing. I should have said you were quite passionate about the subject, much like me in this thread, perhaps.

They will answer your question, of course, but that was my whole point. They have gone the controversial route of letting anyone scan any app, regardless of ownership, and the only thing that “stops” someone from scanning an app they don’t own is that it is against the terms of use of the tool (and as I said, the latter wasn’t actually part of the original post).

When I read the original post, you immediately came to mind because of how strongly you expressed your opinion about the AI stuff, and given your opinion on the matter, I figured you would really dislike what is going on here. I could easily be wrong, of course, but if one of your apps had been included in the concerning study they did, I would think that would be exactly what you were speaking out against in your other thread.

Oh, and about the email you shared, there really are no words on that one, and I guess that is what Flusk is all about here… trying to stop people like that guy from taking advantage of Bubblers, and again, I respect the hell out of that.

That’s a very solid point. I guess my divergence on this topic is Flusk can only observe an app from the public’s point of view whereas Bubble - on the topic of training AIs - has complete access to every component of our apps - bits that prior to the AI conversation were beyond the reach of anyone other than an app’s developer to monetize.

But back to Flusk. I did just try the app. I think it’s really cool. I see they have two solid ways of validating app ownership: you can add a special email address they provide as a collaborator to your app OR you can temporarily create a specific page in version-test. I opted for the latter. Then I tried the privacy rules checker that started this whole convo.
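For the curious, here’s a minimal sketch of how that page-based check could work on the verifier’s side. The page path, token scheme, and function names are my own guesses for illustration, not Flusk’s actual implementation:

```python
# Hypothetical sketch of a page-based ownership check, similar in spirit
# to the version-test method described above. The page path, token
# scheme, and names are illustrative, not Flusk's implementation.
import secrets
import requests

def issue_challenge() -> str:
    """Generate a one-time token the app owner must publish on the page."""
    return secrets.token_urlsafe(16)

def verify_ownership(app_domain: str, token: str) -> bool:
    """Pass only if the agreed-upon page exists and echoes the token."""
    url = f"https://{app_domain}/version-test/flusk-verify"
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return False
    return resp.status_code == 200 and token in resp.text

# Only someone with editor access to the app can create that page,
# so a successful fetch proves control of the app.
```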

They posted the URL earlier, but here’s a shorthand… to check any URL, pop it in after where it says ?url= below

https://app.flusk.eu/privacy_rules_checker?url=app.flusk.eu
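If you’d rather build the link programmatically, here’s a tiny helper (my own convenience snippet, nothing official; it just URL-encodes the domain to be safe):

```python
# Convenience helper for the checker link above; plain string building,
# nothing Flusk-specific.
from urllib.parse import quote

def checker_url(app_domain: str) -> str:
    return "https://app.flusk.eu/privacy_rules_checker?url=" + quote(app_domain, safe="")

print(checker_url("app.flusk.eu"))
# -> https://app.flusk.eu/privacy_rules_checker?url=app.flusk.eu
```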

Flusk has censored their own app from review but - as was promised - any app that hasn’t been blacklisted is going to generate a report. Now that I’ve actually gone through the motions, seeing exactly how easy it is to make this report has me second guessing how comfortable I am with it.

Considerations to hold in the balance:

  • There are free tools like this for traditional code. Within the first 30 minutes of discovering Trufflehog, I was able to use it to find exposed API keys leaked in a popular GitHub repo (a minimal sketch of that kind of scan follows this list).
  • Amazon constantly patrols all of GitHub, so if one of their API keys is ever detected in a public repo, in addition to notifying the key’s owner, Amazon will automatically “quarantine” the key itself to dramatically restrict what services it can be used for.
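To make the Trufflehog point concrete, here’s a minimal sketch of the kind of pattern matching such scanners automate. The AWS access key ID format (AKIA plus 16 uppercase alphanumerics) is publicly documented; real scanners layer on entropy checks and hundreds more signatures:

```python
# Illustrative sketch of what scanners like Trufflehog automate:
# matching known credential formats in text. Real tools add entropy
# checks and many more patterns than this single one.
import re

AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return anything that looks like an AWS access key ID."""
    return AWS_KEY_RE.findall(text)

# AWS's own documented example key, not a real credential:
sample = "config = {'aws_key': 'AKIAIOSFODNN7EXAMPLE'}"
print(find_exposed_keys(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```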

Those are two different approaches. Flusk became Trufflehog because Bubble failed to be Amazon. Is it eerie that anyone can see your app’s privacy rules come up red? Yes. But maybe that’s the finer point:

That 89 of the top 100 apps were caught with their pants down - even after @josh has worked on Bubble for 11 years - shows Bubble isn’t proactively doing our security enough favors.

I respect the passion, Mike!


Actually had a blast laughing at that :laughing:

This is also how we started Flusk back then. When we published our 2023 Security Study, we noticed that even when providing companies with strong evidence, it was very hard to convince people to fix vulnerabilities, even though we were ready to fix them for free.
And here I’m talking about million-dollar companies that were facing serious data leaks.
So I can imagine that the shady approach of selling the book doesn’t help here.

Though it makes me wonder whether we should run a big email campaign on all Bubble apps that have data leaks and send them the tool, @wesleywsls

Totally agree, and I had the same concerns. But to be honest, the difference between what’s public and what’s private in your app is very small.
So at most, they’d have access to API tokens, nothing more, which I imagine would be easy to remove from the training data.
To add more context, here are the security points we’re able to check without any access to your app, so you can see how much is already public (which essentially is not a problem)

Interesting :thinking:
Did you push the changes live? Because the tool only checks the live environment on its free version!

Again, feel free to share your app ID or URL here or in the Intercom chat so we can investigate further :rocket:

I think there’s a way you can go about proactively emailing folks that doesn’t come off as a shakedown. Not that I have perfect insight on how to do that - this is such a sensitive subject that some will be glad you reached out and others upset you had the audacity to scan their apps. Here are some thoughts:

  1. I wouldn’t reveal any “compromised” data in the email itself. There’s no telling exactly who’s getting your emails, so there’s a chance someone reads something they maybe shouldn’t have - even if it’s a minor thing.
  2. Introduce yourself starting with links pointing back to authoritative content you’ve already published on the forum. This way folks can easily confirm you’re not some rando but rather an established (and trusted) Bubble community member.
  3. Don’t make your offer merely a “pay me or pay me” choice. Lead with the fact that you have free resources they can use to check what’s leaked for themselves. This way, again, it doesn’t feel like a shakedown / hostage situation.
  4. As you know, people are going to be embarrassed they got caught with their pants down. Reaffirm that, while this is serious, it’s unfortunately more common than people would think. Point to your study showing how 89 of the top 100 apps had similar issues and emphasize that - really - it’s your mission to make the Bubble community a safer, more secure place.
  5. As much as possible, your message should not read as if their misfortune is your golden opportunity (even though it is).
  6. Keep it short.

Good luck!


Great advice :rocket: Thanks!


Yes, I’m certain that I published the changes to the live environment. I scanned it again today, and the deleted fields are gone, while the AI predict message now says “Based on current results.” I’m not sure where the Intercom chat is, but here are some photos of fields with obvious privacy rules (currently active on live version) that should prevent them from being exposed:


Latest scan of the same datatype:

Hey @Benjamin_Rodgers!

I might have a clue why the checker finds this data.
When I look at your second privacy rule (the one based on the Company), it’s possible that some “Sign Type” records don’t have a “Company” filled in.
Since the checker runs as a logged-out user, the “Current User’s Company” is also empty.

So, empty == empty, and this might be why the data passes through this rule.

To fix this, you could add a condition and make the rule: “Current User’s Company is This Sign Type’s Company and Current User’s Company isn’t empty and This Sign Type’s Company isn’t empty”
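To make the failure mode concrete, here’s a quick sketch in plain Python (just modeling the logic; Bubble evaluates these rules internally, and None stands in for “empty”):

```python
# Sketch of the failure mode described above: a logged-out visitor has
# no Company, so a record whose Company was never filled matches the
# bare-equality rule. This models the behavior, not Bubble's code.
def rule_passes(user_company, record_company):
    # Original rule: bare equality. None == None is True, which is
    # exactly how empty == empty lets the record through.
    return user_company == record_company

def fixed_rule_passes(user_company, record_company):
    # Fixed rule: equality plus explicit non-empty checks on both sides.
    return (
        user_company is not None
        and record_company is not None
        and user_company == record_company
    )

print(rule_passes(None, None))            # True  -> record exposed
print(fixed_rule_passes(None, None))      # False -> record protected
print(fixed_rule_passes("Acme", "Acme"))  # True  -> legitimate match still works
```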

Could you try again with this privacy rule?


Hey, that worked! It makes a lot of sense in retrospect, but it just goes to show how challenging Bubble has made it to debug situations like this, especially since I’ve used the same logic on many data types (meaning they weren’t secure). Anyway, thanks for your tool. My database is now much more secure :blush:

You’re welcome! Glad it helped and you were able to troubleshoot the issue!
That’s actually a very common mistake we see among apps; we’ll add it to the book as well! :rocket:


Hey team Flusk! Great app - this tool has made it very easy to identify vulnerabilities. I imagine very large apps will benefit the most from this as - with their size - it’s much easier to accidentally create a condition that leads to a security issue.

That said, I do have a question. You have an alert set up for something called Visible URL in API call. The issue description for this is:

The URL of this call might be sensitive. As a precaution, it is recommended that the URL be concealed or, if deemed appropriate, this issue may be disregarded. For instructions on how to conceal an API call URL, please refer to the corresponding section of the help documentation.

When I try to click on the documentation link for this issue, the link doesn’t work. Can you point me to your docs on what this is / how to fix it? Thank you,

Thanks for pointing out the issue.
Can you provide me with details about which link exactly is not working?

Here’s the documentation of the issue:

I really like this one as we’re using some custom AI models and GPT for more accurate results :yum:

Just a quick point - if you’re classifying potentially sensitive data by sending it to the GPT API, that may create a privacy problem, as OpenAI retains that data.
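One possible mitigation, just as a sketch and not a claim about how Flusk actually does it: redact obvious identifiers locally before anything leaves for a third-party model API. The patterns here are illustrative only:

```python
# Hypothetical mitigation sketch: scrub e-mail addresses and
# token-looking strings before sending text to an external model API.
# Illustrative patterns only; not Flusk's actual pipeline.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
TOKEN_RE = re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{8,}\b")

def redact(text: str) -> str:
    """Replace e-mails and API-token-looking strings with placeholders."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    return TOKEN_RE.sub("[REDACTED_TOKEN]", text)

print(redact("Contact jane@example.com, token sk-abcdef123456"))
# -> Contact [REDACTED_EMAIL], token [REDACTED_TOKEN]
```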