I asked AI about this…
Is the statement “AI crawlers don’t see Bubble websites because they don’t render JS” true?
- True in the narrow sense: Any crawler that fetches only the raw HTML (no JavaScript execution) will not see the dynamically generated content on a Bubble site. Because Bubble defers almost everything to client-side JS, such a crawler ends up with a mostly empty HTML shell.
- Not universally true: Some “AI” or “bot” systems do run a full JS engine (for instance, Googlebot’s rendering pipeline or headless-Chrome-based scraping services). Those can execute Bubble’s client-side code and therefore see the same rendered page that a human visitor sees.
- In practice, most large search engines have caught up and do execute JS. But if you’re relying on a smaller AI data-ingestion pipeline (or building your own bot that simply grabs HTML), you will indeed miss essentially all of a Bubble app’s content; the sketch below shows the difference.
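To make that gap concrete, here is a minimal Python sketch (a rough illustration, not anything Bubble-specific) comparing what a plain-HTML fetcher sees versus what a JS-executing, headless-Chromium crawler sees. The URL is a hypothetical placeholder; for a real Bubble page you would expect the rendered HTML to be far larger than the raw response.

```python
# Rough sketch: plain HTML fetch vs. headless-browser fetch.
# Requires: pip install requests playwright && playwright install chromium
import requests
from playwright.sync_api import sync_playwright

URL = "https://example-app.bubbleapps.io"  # hypothetical Bubble app URL

# 1) What a no-JS crawler gets: just the initial HTML shell.
raw_html = requests.get(URL, timeout=10).text
print("Raw HTML length:", len(raw_html))

# 2) What a JS-executing crawler gets: the page after Bubble's
#    client-side code has run.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")
    rendered_html = page.content()
    browser.close()
print("Rendered HTML length:", len(rendered_html))
```

On a typical Bubble page the second number dwarfs the first, which is exactly the gap an HTML-only crawler falls into.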
What Bubble teams usually do (and what you can do) to fix it
- Bubble introduced “SEO-friendly” options (often called “prerendering” or “server-side snapshots”). Essentially, you configure Bubble (or a plug-in) so that when a crawler that doesn’t run JS arrives, Bubble’s servers detect its user agent and return a pre-rendered HTML snapshot instead of the bare JS shell. That way, every crawler, AI-based or not, sees the full content (see the sketch after this list for the general pattern).
- If your Bubble app doesn’t have prerendering enabled, then yes, a no-JS AI crawler will completely miss your content. So it’s arguably “true” today that any plain-HTML fetcher won’t see your Bubble pages. But it isn’t an inherent limitation of AI crawlers: they just need a JS engine, and some have one while many don’t.
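For illustration only, here is a rough Python/Flask sketch of the general user-agent-detection pattern that prerendering setups follow. Bubble handles this on its own servers, so this is not code you would deploy for a Bubble app; the app origin, renderer URL, and bot list are all hypothetical placeholders.

```python
# Minimal sketch of user-agent-based prerendering at a reverse proxy,
# assuming a separately running headless renderer (a Prerender-style
# service) at RENDERER_URL. All names/URLs are illustrative only.
# Requires: pip install flask requests
import requests
from flask import Flask, request, Response

app = Flask(__name__)

APP_ORIGIN = "https://example-app.bubbleapps.io"    # hypothetical app origin
RENDERER_URL = "http://localhost:3000/render"       # hypothetical prerender service
BOT_MARKERS = ("googlebot", "bingbot", "gptbot", "ccbot")  # partial, illustrative list

def looks_like_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    target = f"{APP_ORIGIN}/{path}"
    if looks_like_bot(request.headers.get("User-Agent", "")):
        # Bots get a pre-rendered HTML snapshot of the fully executed page.
        snapshot = requests.get(RENDERER_URL, params={"url": target}, timeout=30)
        return Response(snapshot.text, mimetype="text/html")
    # Humans (and JS-capable crawlers) get the normal JS shell.
    upstream = requests.get(target, timeout=30)
    return Response(upstream.text, mimetype="text/html")
```

The key design point is that humans and JS-capable crawlers still get the normal app, while HTML-only crawlers get a snapshot of the already-rendered page.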