Bubble can simply delete your app or subscription

Sounds like Bubble should turn their automated NSFW detection into some type of Bubble feature that we can turn on in our app and use to notify users of bad conduct, to avoid saving inappropriate data, etc.

We want the same protections as Bubble here and don’t want illegal content in our apps, but when trying to make software development accessible to everyone (Bubble’s mission), there will be a large number of users who don’t know how to detect this themselves, or even that they should.

5 Likes

I would like to chime in on the comment from @gf_wolfer. Reading this, I’m inclined toward a similar solution.

The premise of my solution is that we treat the relationship between Bubble and developers as a partnership rather than an adversarial one. Most app developers also want their platform to be safe and within the guidelines they intend for their application. Bubble has some responsibility and will take some action to protect the overall platform, regardless of what we think about it. App developers just do not want to be put in a position of powerlessness, with their entire platform, work, effort, and livelihood in jeopardy because of the bad behavior of users they do not necessarily have the power to control.

Therefore, I would suggest that the solution is a fixed API, available to all developers, that allows the Bubble platform to detect and inform a Bubble app operator of an offending user account. A Bubble developer can implement this API and then publish that their app is SFW certified (or whatever), indicating that they’ve implemented this feature.

The feature does not have to be automatic deletion, but simply a more effective and more democratic way for Bubble to alert an app operator to bad behavior.
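
To make the suggestion concrete, here is a rough sketch of what such a notification payload and an operator-side handler could look like. Every name here (the `ModerationFlag` type, its fields, the handler) is hypothetical, since no such Bubble API exists today; it’s only meant to illustrate the shape of the idea.

```typescript
// Hypothetical payload Bubble could POST to an operator-registered webhook
// when its scans flag a record. All names are invented for illustration;
// no such Bubble API exists today.
interface ModerationFlag {
  appId: string;          // the affected Bubble app
  recordType: string;     // e.g. "User" or "Message"
  recordId: string;       // unique id of the flagged record
  reason: string;         // e.g. "keyword_match" or "phishing_pattern"
  matchedTerms: string[]; // terms or signals that triggered the flag
  flaggedAt: string;      // ISO-8601 timestamp
}

// Sketch of an operator-side handler: surface the flag to the app's own
// moderators rather than having the whole app taken down.
function handleModerationFlag(flag: ModerationFlag): void {
  console.warn(
    `Moderation flag on ${flag.recordType} ${flag.recordId}: ${flag.reason}`
  );
  // e.g. suspend the offending account, notify an admin,
  // or queue the record for manual review.
}
```

An app operator implementing a handler like this could suspend or review the flagged account themselves, which is all the “SFW certified” idea really requires.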

2 Likes

I like this idea. I would rather let Bubble audit my database if it means preventing my app from being shut down without warning.

Why? I can boil it down to four letters. WOKE!

I believe that if there is an internal operator that performs the NSFW check in the app itself, it gives us greater control as developers to implement the kinds of functions and features we see fit to ensure our apps are not falling afoul of NSFW rules. Having it as an API means extra WU costs for sending and returning the data, and it could potentially slow down the platform. A built-in operator that filters for the NSFW keywords Bubble uses in its scans would be a much more flexible option for developers.
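
To illustrate the kind of in-app check I mean, here’s a minimal sketch of a keyword screen a developer might run before saving user content. The word list and function are placeholders, and Bubble’s actual scan terms aren’t public, so treat this purely as an illustration.

```typescript
// Placeholder word list; Bubble's actual scan terms are not public.
const nsfwTerms = ["term1", "term2", "term3"];

// Returns the terms found in user-submitted text, so a workflow can block
// the save, warn the user, or queue the record for manual review.
function findNsfwTerms(text: string): string[] {
  const lowered = text.toLowerCase();
  return nsfwTerms.filter((term) => lowered.includes(term));
}

// Example: run the check before saving a new record.
const matches = findNsfwTerms("some user-submitted text");
if (matches.length > 0) {
  console.log(`Save blocked; matched terms: ${matches.join(", ")}`);
}
```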

6 Likes

There are a host of elements within this thread that are gravely concerning to me, both as a business owner who has invested hundreds of hours and thousands of dollars in this (meta)platform, and as someone who routinely evangelizes and refers fellow entrepreneurs to it.

  1. Automated deletions of legitimate Bubble apps on paid plans with no apparent prior human intervention, and it would seem not even an email communication to the affected Bubble client/app owner.

  2. Enforcement actions taken against Bubble apps — whether programmatically or manually — based on “Summary” elements of the Terms of Service (and the Acceptable Use Policy [AUP] by reference) that are explicitly denoted by Bubble’s lawyers as having no legal effect, and thus no legal binding on Bubble’s clients or their apps.

  • Of equal importance, as has been aptly pointed out by several different members of the Bubble community in this thread, the “Summary” for “3. USE OF THE PLATFORM” has no grounding in either the terms within that section of the ToS, or in the Acceptable Use Policy incorporated by reference. And yet this same non-binding “Summary” is what Bubble’s team and internal “trust and safety” algorithms seem to both be leaning on in taking compliance actions and making compliance decisions. Very troubling.
  3. Bubble seems to have been caught talking out of both sides of its mouth at the moment when it comes to assuring the privacy of the data stored in apps built on its platform. On the one hand, it has at long last implemented a programmatic safeguard that gives the impression that data stored in a private, deployed Bubble app can only be accessed by Bubble when its client/the app owner has explicitly given permission for said access.

"Grant Bubble data access to troubleshoot your app

Select a value in this dropdown to grant Bubble employees different levels of access to your app’s data, helping them troubleshoot issues for you.

"[…] If the setting is disabled (no permissions):

  1. Bubble employees will not have access to the Data – App data tab and cannot see/edit database records in any of your app’s databases

  2. Bubble employees will still be able to view and edit the app

  3. Bubble employees will be able to see any app data exposed in app preview or the deployed, live app*

  4. Bubble employees will be able to see data that is exposed in the logs

Only app admins will have the authority to change this setting. Bubble employees do not have access to change it."**

  • On the other hand, we find out that Bubble is automatically scanning the private data stored within its apps, something that at a bare minimum will need to be disclosed by Bubble clients to their own app users, especially any users based in the EU.

*It is at best unclear what qualifies as “exposed” app data, and whether this somehow includes querying private data in apps simply because they have been deployed.

**Conspicuously absent from the Manual entry above is any mention of Bubble’s internal “trust and safety” algorithms.

  4. Even antivirus software quarantines potentially malicious files for review when they are detected; it doesn’t immediately delete them with no prior notice or ability to review whether or not a file is — in fact — harmful. There are a myriad of other ways Bubble could be handling this if it continues to auto-surveil the private data held within its clients’ apps, any of which would be far superior to what has been outlined in this thread as its current practice.
  • My recommendation — IF this auto-surveillance continues, which I believe may well violate privacy laws or, at a minimum, Bubble’s own legal agreements with its clients — would be to take an approach similar to antivirus software: effectively quarantine an app by presenting an innocuous, nondescript message to any users who try to access it (e.g. that the app is undergoing unannounced internal review and critical maintenance), immediately notify the app owner (Bubble’s client), and allow them to respond to the purported issue. That response would then be reviewed by a qualified, legally versed Bubble staff member against Bubble’s actual legal agreements with its clients, including the ToS and AUP that Bubble clients agree to abide by.
  5. I am alarmed by the potential implications of this auto-surveillance for the app I am currently working on, which you could crudely characterize as a specialized LinkedIn for a closed business network. I do not administer the network — that is, I do not decide who gets admitted and who gets rejected. However, the network does have published criteria for what it considers when a prospective member applies to join, which are primarily revenue-based/tied to the size of a business.
  • Both the CEOs of OnlyFans and PornHub would technically qualify based on these criteria, were they to ever apply in the future to be members of this organization. If accepted, they would add their profile similarly to LinkedIn, with the name of their business (both presumably “NSFW” terms in the context of this thread, and auto-surveillance activities of both published and private app data by Bubble) and a link to their respective business websites.

  • It would require quite the stretch of the imagination — and interpretation of Bubble’s ToS and AUP, even if they did have any actually legally binding terms precluding the promotion of (consensual) pornography, which they currently do not — to claim that my app is promoting pornography, and yet after this thread I am concerned that the possible growth of this business network (which is about to take off exponentially in the U.S.) could put the very existence of my app in jeopardy. Nothing I have read here is reassuring to the contrary, even in this, the most “vanilla” of business application contexts.

  6. To give a less “vanilla” category of business application contexts as an example further illustrating the issues at play here, is it now reasonable to assume that anyone looking to build a dating app should not attempt to do so on Bubble? Any DM-type features will most certainly involve “spicy” :hot_pepper: chat between users of said apps, however defined within the client communities targeted (limited only by the imagination of the app developer, given the highly specialized dating/matchmaking apps that exist today).

  7. I have spent thousands of dollars to retain legacy Bubble plans following the change in pricing and business model announced last year. After over a year of holding on to these plans, it became evident that Bubble had omitted an important asterisk from its announcements and related communications to its client base:

*We are going to paygate all of the promised platform improvements developed over the next 18 months to the new pricing plans, with the exception of UI-related improvements because it is too annoying and burdensome to maintain 2 separate UIs for the platform.

  • Had Bubble been upfront about this, I would never have retained those plans. I find this at best dubious and at worst manipulative and unethical. My trust and faith in Bubble as an organization has been hugely undermined by this experience; one that I am sure anyone else who faced the short-notice, forced decision to retain/acquire legacy plans by 2023-05-01 or lose them forever can empathize and identify with.
  8. It is at best unclear whether this is a sin of omission or a sin of commission when it comes to the paygating of promised platform improvements, but given that Bubble only recently (begrudgingly, or graciously, depending on whether you ask Bubble or its clients, I suspect :thinking:) extended bulk data operations to its Agency plans, I lean towards the latter in this instance.

  9. It seems there may be another missing asterisk at the end of Bubble’s mission statement. If that is the case, the Bubble community deserves to know sooner rather than later, and deserves no less than absolute transparency and clarity as to what types of apps can be built on Bubble, and which cannot.

“We aim to build the best platform that empowers our users to create powerful web apps* without writing code.”

*Excluding certain categories of web apps, as outlined on a page similar to Stripe’s “Prohibited and Restricted Businesses” page?

As others have noted in this thread, this would seem to run contrary to Bubble’s stated mission and the current ToS and AUP, but with enforcement actions like the one that precipitated this thread, it seems Bubble may be heading in that direction and either a) drifting from its Mission or b) redefining its Mission in real time.

Is this a sin of omission, or a sin of commission? Time — and Bubble’s further response or non-response to the communal concerns raised herein — will tell.

  10. Contrasted with the hardware and software issues that characterized the platform instability earlier this year, these are “firmware” issues with Bubble’s current governance and policy framework, and the application (or misapplication) thereof by Bubble’s staff and internal “trust and safety” algorithms. These too are stability issues, calling into question the trustworthiness of Bubble as a reliable and predictable platform to build on. They should be treated with no less seriousness, and a corresponding allocation of resources by your organization.
  • Now that the technical stability issues have sufficiently ‘stabilized’, it seems both timely and prudent for Bubble to invest in and commit to an independent review of its governance and policy framework to address its non-technical stability issues.

  • While whoever you choose for this should be highly reputable, with a solid body of work over many years supplementing their credentials, I recommend you consider Ontario’s former Information & Privacy Commissioner, Dr. Ann Cavoukian, for this. I have no connection to her whatsoever; I simply respect and admire some of the important work she has done to help organizations improve their governance and accountability, including her Privacy by Design (PbD) framework, which has since become an international standard that organizations can certify against.

Though I can only speak for myself, the flippant handling thus far of the well-founded communal concerns raised within this thread by Bubble’s head of trust and safety — including marking this thread as “Resolved” to inhibit further discussion :roll_eyes: — has not bolstered my trust; it has further eroded it. After a well-deserved long weekend for many of Bubble’s team, I hope that we receive as thorough, thoughtful and transparent a response to these non-technical stability concerns as the technical stability concerns from earlier this year did from the highest levels of the organization, including an ongoing commitment to equally transparent communication, dialogue and updates on their resolution.

13 Likes

Why is there no response / reassurance / clarification from leadership about this?

2 Likes

Pretty sure it was holiday season when this topic blew up. You can refer to the previous posts.

1 Like

Good summary; you’ve captured it really well.

The bottom line is that Bubble is not suitable for enterprises. There is a lack of understanding of data protection and information security.

If they are scanning apps for so-called “bad words,” there should be strict technical and procedural controls in place. The full explanation should have come a lot earlier, with links to “public versions” of these policies/procedures.

Also, scanning for keywords and using that to pause or delete apps is bad practice. Any “bad” word can be used in blog posts, articles, or journalistic reports for legitimate purposes, and this is not against the law. It’s not illegal to talk or write about it…

2 Likes

By the way, the resolved tag is added when the OP marks a reply as the solution.

2 Likes

I think in some cases Bubble forum moderators have the ability to mark a thread as solved without the OP doing so. I noticed it was marked as solved as soon as Laura replied, and my suspicion is that she may have marked it herself. @arthuribeiro did you mark the thread as solved?

5 Likes

even if @arthuribeiro marked this as ‘solved’, considering his replies in this thread, I think that he’s trying to avoid rocking the boat too much.

If I had my app deleted in the way he did, and with vague support barely got it back, I might be a little jumpy myself.

3 Likes

This thread started on June 30th. Today is Tuesday, July 9th.

This not being a top priority is concerning, considering this thread is full of Bubblers who all pay $$$ per month for this platform.

I could go on and on and on about how much time we’ve all invested into this platform, but the bottom line is $$$.

Not taking this thread seriously is just bad business.

6 Likes

I’m pretty sure it’s also because @laura.oppenheimer already spoke about it officially and said it would be her last post on this topic, since she can’t reveal what Bubble does in terms of Trust + Safety.

3 Likes

@laura.oppenheimer’s response told us nothing; it’s a typical “we can’t discuss this, go away” response. As users of the platform, our concerns have not been addressed.

  1. Is Bubble reading private data? If so, please explain how and who is reading it, because businesses do not want their client data exposed so easily.

  2. Will Bubble notify the app owner of a problem and provide a reasonable timeline to address the issue, or will Bubble continue to auto-shut down apps if it finds offending words/material in the database? They may say, “well, we found 10K records of offending content, so we had to shut it down automatically,” but we could have been spammed with 10K records and not known in time to fix it. We need to understand the process Bubble goes through when offending material is found.

  3. What tools will Bubble make available to help us protect our apps from being bombed with offending material by a bad actor? (A sketch of the kind of stopgap I mean follows below.)
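
In the meantime, even something as basic as per-user submission throttling can slow a bad actor down. A rough sketch, with a made-up limit and in-memory storage purely for illustration:

```typescript
// Naive in-memory throttle: allow at most `maxPerMinute` submissions per
// user. The limit and storage are made up for illustration; a real app
// would enforce this server-side with persistent counts.
const submissionTimes = new Map<string, number[]>();
const maxPerMinute = 5;

function allowSubmission(userId: string, now: number = Date.now()): boolean {
  const windowStart = now - 60_000; // one-minute sliding window
  const recent = (submissionTimes.get(userId) ?? []).filter((t) => t > windowStart);
  if (recent.length >= maxPerMinute) {
    return false; // over the limit: reject or queue for review
  }
  recent.push(now);
  submissionTimes.set(userId, recent);
  return true;
}
```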

Bubble asks us to trust it with our entire business, yet it has the power to shut a business down in a second, with no transparency around how it will apply that power. A scary thought if you’re considering Bubble as a long-term solution and not just an MVP.

3 Likes

I guess we now know what will be one of the most anticipated questions of Bubblecon :smirk:

3 Likes

Too much of this goes on here: “we can’t talk or explain, as it would negatively impact safety and security.” A lot of BS. “Can’t explain, can’t provide evidence” doesn’t stack up.

We need to think differently: when we host other people’s (“clients’”) data on Bubble, we are accountable for its confidentiality and availability while it resides on this platform. Those clients are placing trust in us.

2 Likes

Hi all,

Jayvee from the Community team here. The issue raised here has been addressed and the OP marked the thread as resolved — the Bubble team is not censoring this thread or preventing community members from responding. That said, I want to step in and provide some clarity while also protecting details related to the OP’s account and ensuring that we do not compromise the platform’s integrity.

Firstly, Bubble does not delete apps.

The team has built tools to help identify recurring patterns of abuse and alert us to misuse on the platform. Some of these patterns may include keywords that overlap with legitimate uses. We’ve encountered a number of cases where users have created malicious apps in large quantities. In cases like these, bad actors often employ advanced methods of theft, which means paid plans are sometimes included in the review pipeline.

Scanning apps for content that violates our Terms of Service and Acceptable Use Policy is critical for protecting the larger platform and overall community.

While Bubble doesn’t specifically flag all “not safe for work” (NSFW) content, we do flag content that may be considered NSFW in response to past and ongoing instances of abuse. This includes, for example, phishing apps, fraudulent or misleading apps, or other schemes.

To ensure the safety of the Bubble platform, we use a set of criteria which includes multiple signals of potential abuse to determine what apps will be reviewed. Of those, a very small percentage will actually be flagged. Our systems are designed to keep our platform and users safe and while we do our best, we don’t always get it right.

We do not plan on further engaging in this thread, so if you encounter any instances of abuse or believe content in your app has been flagged incorrectly, please reach out via our Support center. We appreciate your understanding and support.

11 Likes

@jayvee.nava We understand that, but you block apps.

Again, this thread will continue until you answer these questions:

And how should we deal with/consider this important point:
Bubble is reading private data.

6 Likes

I know you said no one plans to continue this discussion, but I have a follow-up to this point, and I hope you can provide clarity.

Is this process automated or manual? By that I mean: when it’s time to take action (block the app’s domain, or whatever enforcement action is deemed appropriate), is it a person who reviews the automatic flags and decides, “Yes, this is not allowed on our platform, take it down”? Or is it an automated process that errs on the side of “better safe than sorry,” takes immediate action after a certain threshold, and lives by a “better to ask forgiveness than permission” mantra?

1 Like