Thank you so much @antoinechiro, really appreciate your feedback!
I think it's normal that the ecosystem has different points of view, and that's also what makes it so rich!
Hey @mikeloc!
Thanks a lot for putting so much effort into sharing your opinion about the current situation.
Yes, it did change. We made a mistake saying this (it was false!), and we modified the initial thread.
As you said, we know it's controversial. And as @vnihoul77 said, we think it's the best way to really make the ecosystem more secure, even if it doesn't look that way to you at the moment.
We'll see in the end.
Thanks for the kind words. Even if the form of what you've said isn't the easiest thing to hear, the content is relevant, and internally we thought a lot about the method we used and whether we were right or wrong to go that way.
I think it does. Please continue to share your opinion, this is part of what makes this community so powerful.
I think that exchanges like these can only move us forward, never backward.
Securing privacy in our apps is crucial.
Privacy rules are often neglected by devs and even clients (when they see the app working, that's good enough).
This looks very promising, I will try it ASAP.
So I did a free scan on my app and found some fields were unintentionally exposed (mostly from deprecated data types). I changed the privacy rules so they are not public, but after deploying the update and doing a new scan, some of the data types are still appearing as leaked even though there should be no way that they are now. Any ideas on why this might be happening?
Hi @Benjamin_Rodgers
Sometimes the results might be cached for up to 15 minutes. Can you make sure to try again after this delay and in a fresh tab?
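To give an idea of what's happening, the behaviour is roughly that of a simple TTL cache (a rough sketch, not our actual code):

```python
# Rough sketch of the caching behaviour described above, not Flusk's real code.
import time

CACHE_TTL = 15 * 60  # results kept for up to 15 minutes
_cache: dict[str, tuple[float, str]] = {}

def run_fresh_scan(app_url: str) -> str:
    # Placeholder for the real scanner
    return f"fresh report for {app_url}"

def scan(app_url: str) -> str:
    now = time.time()
    cached = _cache.get(app_url)
    if cached and now - cached[0] < CACHE_TTL:
        # Within the TTL window, the previous report is returned as-is
        return cached[1] + " (Prediction cached from similar previous test.)"
    report = run_fresh_scan(app_url)
    _cache[app_url] = (now, report)
    return report
```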
Otherwise, feel free to share your Privacy Rules settings here so we can help you troubleshoot them 
It seems that it must be cached for much longer than 15 minutes, because I have since deleted some of the old data types (after scanning yesterday) instead of changing the privacy rules, and they are still appearing when I just scanned. There is also a message displayed that says "Prediction cached from similar previous test."
Well-respected!? You just made my month
Adobe, a $158B company, made the effort to release a "Do not train" option for their customers to opt out of having their creative outputs used to train Adobe AIs. If wanting Bubble to do the same is crazy, then sure - I've lost my mind
But we're here to talk about Flusk - I think both your initial alarm and your final conclusion are correct, Mike. I was thinking of making a Flusk-like service as far back as 2021. @petter of Amlie Solutions was thinking of doing the same even earlier… it still says "coming soon" on his website:
Point is it was inevitable someone was going to create this. But it was just as inevitable malicious actors would systematically exploit unhardened Bubble apps to make a buck. A guy named Ryan Kulp has been going around emailing the owners of unsecured apps with a scary message:
I've got a few problems with this email:
Having a free scan available, as Flusk later clarified, is significantly better than what this fleece job is offering. Better still, Flusk redacts sensitive information from the scan (so they say; I haven't tried their solution).
I think Flusk, in general, will be a net positive for the community because - again - our apps are already being scanned by shadier actors.
My only question is, how does Flusk ensure that I actually own the app I've claimed is mine (to prevent it from being scanned by the public)? I'm assuming you require we create an option set or add something to our HTML that proves app ownership?
Who said I was talking about you, @zelus_pudding?
Yeah, sorry about the "lost his mind" thing. I should have said you were quite passionate about the subject, much like me in this thread, perhaps.
They will answer your question, of course, but that was my whole point. They have gone the controversial route of letting anyone scan any app, regardless of ownership, and the only thing that "stops" someone from scanning an app they don't own is that it is against the terms of use of the tool (and as I said, the latter wasn't actually part of the original post).
When I read the original post, you immediately came to mind because of how strongly you expressed your opinion about the AI stuff, and given your opinion on the matter, I figured you would really dislike what is going on here. I could easily be wrong, of course, but if one of your apps had been included in the concerning study they did, I would think that would be exactly what you were speaking out against in your other thread.
Oh, and about the email you shared, there really are no words for that one, and I guess that is what Flusk is all about here… trying to stop people like that guy from taking advantage of Bubblers, and again, I respect the hell out of that.
That's a very solid point. I guess my divergence on this topic is that Flusk can only observe an app from the public's point of view, whereas Bubble - on the topic of training AIs - has complete access to every component of our apps - bits that, prior to the AI conversation, were beyond the reach of anyone other than an app's developer to monetize.
But back to Flusk. I did just try the app. I think it's really cool. I see they have two solid ways of validating app ownership: you can add a special email address they provide as a collaborator to your app, OR you can temporarily create a specific page in version-test. I opted for the latter. Then I tried the privacy rules checker that started this whole convo.
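If you're wondering how the page-based check can work under the hood, here's my guess at the general idea (a hypothetical sketch, not Flusk's actual implementation):

```python
# My guess at a page-based ownership check, not Flusk's actual code.
import secrets
import urllib.error
import urllib.request

def issue_challenge() -> str:
    # The service hands the claimed owner a random token to use as a page name
    return secrets.token_urlsafe(16)

def verify_ownership(app_domain: str, token: str) -> bool:
    # Only someone with editor access can create this page, so a 200
    # response is reasonable evidence the requester controls the app
    url = f"https://{app_domain}/version-test/verify-{token}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False
```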
They posted the URL earlier, but here's a shorthand… to check any URL, pop it in after where it says ?url= below:
Flusk has censored their own app from review but - as was promised - any app that hasn't been blacklisted is going to generate a report. Now that I've actually gone through the motions, seeing exactly how easy it is to produce this report has me second-guessing how comfortable I am with it.
Considerations to hold in the balance:
Those are two different approaches. Flusk became Trufflehog because Bubble failed to be Amazon. Is it eerie that anyone can see your app's privacy rules come up red? Yes. But maybe that's the finer point:
That 89 of the top 100 apps were caught with their pants down - even after @josh has worked on Bubble for 11 years - shows Bubble isn't proactively doing enough for our security.
I respect the passion, Mike!
Actually had a blast laughing at that
This is also how we started Flusk back then. When we published our 2023 Security Study, we noticed that even when providing companies with strong evidence, it was very hard to convince them to fix vulnerabilities, even though we were ready to fix them for free.
And here I'm talking about million-dollar companies that were facing serious data leaks.
So I can imagine that the shady approach of selling the book doesn't help here.
Though it makes me wonder whether we should run a big email campaign for all Bubble apps that have data leaks and send them the tool @wesleywsls
Totally agree, and I had the same concerns. But to be honest, the difference between what's public and what's private in your app is very small.
So at most, they'll have access to API tokens, nothing more, which I guess will be easy to remove from the training data.
To add more context, here are the security points we're able to check without any access to your app, so you can imagine how much is public (which is essentially not a problem):
Interesting 
Did you push the changes live? The tool only checks the live environment on its free version!
Again, feel free to share your app ID or URL here or in the Intercom chat so we can investigate further 
I think there's a way you can go about proactively emailing folks that doesn't come off as a shakedown. Not that I have perfect insight on how to do that - this is such a sensitive subject that some will be glad you reached out and others will be upset you had the audacity to scan their apps. Here are some thoughts:
Don't make it a "pay me or pay me" option. Lead with the fact that you have free resources for them to use to check what's leaked for themselves. This way, again, it doesn't feel like a shakedown / hostage situation.
Good luck!
Great advice
Thanks!
Yes, I'm certain that I published the changes to the live environment. I scanned it again today, and the deleted fields are gone, while the AI predict message now says "Based on current results." I'm not sure where the Intercom chat is, but here are some screenshots of fields with obvious privacy rules (currently active on the live version) that should prevent them from being exposed:
Hey @Benjamin_Rodgers!
I might have a clue as to why the checker finds this data.
When I look at your second privacy rule (the one based on the Company), it's possible that some "Sign Type" records don't have a "Company" filled in.
Since the checker scans as a logged-out user, the "Current User's Company" is also empty.
So empty == empty, and that might be why the data passes this rule.
To fix this, you could extend the condition like this: "Current User's Company is This Sign Type's Company and Current User's Company isn't empty and This Sign Type's Company isn't empty"
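To make the logic concrete, here's the same check sketched in Python (illustrative only - Bubble evaluates privacy rules for you, this is just the boolean logic):

```python
# Illustrative only: why "empty == empty" lets a privacy rule pass.

def naive_rule(current_user_company, record_company):
    # "Current User's Company is This Sign Type's Company" on its own
    return current_user_company == record_company

def fixed_rule(current_user_company, record_company):
    # With the "isn't empty" guards added on both sides
    return (
        current_user_company is not None
        and record_company is not None
        and current_user_company == record_company
    )

# A logged-out visitor has no Company, and some records have none either:
print(naive_rule(None, None))  # True  -> the record leaks
print(fixed_rule(None, None))  # False -> the record stays hidden
```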
Could you try again with this privacy rule?
Hey, that worked! It makes a lot of sense in retrospect, but it just goes to show how challenging Bubble has made it to debug situations like this, especially since I've used the same logic on many data types (meaning they weren't secure). Anyway, thanks for your tool. My database is now much more secure
Youāre welcome! Glad it helped and you were able to troubleshoot the issue!
That's actually a very common mistake we see in apps; we'll add it to the book as well!
Hey team Flusk! Great app - this tool has made it very easy to identify vulnerabilities. I imagine very large apps will benefit the most from this as - with their size - it's much easier to accidentally create a condition that leads to a security issue.
That said, I do have a question. You have an alert setup for something called Visible URL in API call. The issue description for this is:
The URL of this call might be sensitive. As a precaution, it is recommended that the URL be concealed or, if deemed appropriate, this issue may be disregarded. For instructions on how to conceal an API call URL, please refer to the corresponding section of the help documentation.
When I try to click on the documentation link for this issue, the link doesn't work. Can you point me to your docs on what this is / how to fix it? Thank you!
Thanks for pointing out the issue.
Can you provide me with details about which link exactly is not working?
Here's the documentation of the issue:
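In short, the idea is to route the call through a server-side step so the sensitive URL never reaches the browser. A generic sketch of that pattern (hypothetical names, not Bubble's actual mechanism):

```python
# Generic sketch with hypothetical names: hiding a sensitive API URL
# behind a tiny server-side proxy so the browser only ever sees /proxy.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM = "https://sensitive-api.example.com/v1/data"  # never sent to the client

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser calls this endpoint; the upstream URL stays server-side
        with urllib.request.urlopen(UPSTREAM, timeout=10) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ProxyHandler).serve_forever()
```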
I really like this one, as we're using some custom AI models and GPT for more accurate results
Just a quick point - if you're classifying potentially sensitive data by sending it to the GPT API, that may create a privacy problem, as OpenAI retains that data.
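If that's a concern, one workaround - assuming you control what gets sent - is to classify by value shape and metadata rather than the raw value. A rough sketch:

```python
# Rough sketch: describe a field value by its shape/metadata instead of
# sending the raw value to a third-party classification API.
import re

def describe_value(value: str) -> dict:
    # Mask digits then letters so only the structure survives
    masked = re.sub(r"[A-Za-z]", "a", re.sub(r"\d", "9", value))
    return {"length": len(value), "shape": masked}

print(describe_value("john.doe@example.com"))
# -> {'length': 20, 'shape': 'aaaa.aaa@aaaaaaa.aaa'}
print(describe_value("4111 1111 1111 1111"))
# -> {'length': 19, 'shape': '9999 9999 9999 9999'}
```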