Hi,
I am working on my SEO and noticed that the Google crawler doesn't actually fetch any of the content of my pages. If you use Google Search Console and run "Fetch as Google" on a page, you'll see what I mean. None of the content (text/links/etc.) is fetched; it just sees all the "other" stuff, like page titles, meta descriptions and so on.
You can also see this by just using “view source” in your browser.
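For a quick way to see exactly what the crawler receives (before any JavaScript runs), you can request the page with a crawler user agent. Here is a minimal sketch in TypeScript (Node 18+ for the built-in `fetch`); the URL and search string are placeholders for your own page:

```ts
// Fetch a page the way a crawler does: plain HTTP, no JavaScript executed.
async function fetchAsGooglebot(url: string): Promise<void> {
  const res = await fetch(url, {
    headers: {
      "User-Agent":
        "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    },
  });
  const html = await res.text();

  // If your visible text isn't in this raw HTML, the crawler never sees it;
  // only the <title>/meta tags survive into search results.
  console.log(
    html.includes("some text from your page")
      ? "content present"
      : "content missing"
  );
}

// Placeholder URL; replace with one of your own pages.
fetchAsGooglebot("https://yourapp.bubble.is/some-page");
```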
As Google says in the article: "Why is this important? It is important because search results are based in part on the words that the crawler finds on the page." In other words, if the crawler can't find your content, it's not searchable.
Am I missing something basic here, or might this be a big issue for Bubble apps currently?
Seems like this is a general issue with JavaScript websites, and there are solutions like https://prerender.io/ for it (see the sketch below).
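For context on what a service like prerender.io does: on a typical Node/Express site it is wired in as middleware that detects crawler user agents and serves them a pre-rendered HTML snapshot instead of the empty JavaScript shell. Bubble users don't run their own server, so this is purely illustrative of what Bubble would need on its side; a minimal sketch using prerender.io's documented `prerender-node` middleware (the token is a placeholder):

```ts
import express from "express";
import prerender from "prerender-node";

const app = express();

// Requests whose User-Agent matches a known crawler are proxied to the
// prerender.io service, which returns fully rendered HTML; everyone else
// gets the normal JavaScript app.
app.use(prerender.set("prerenderToken", "YOUR_TOKEN_HERE"));

// Serve the regular single-page app for human visitors.
app.use(express.static("public"));

app.listen(3000);
```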
Would this be a lot of work to implement @emmanuel ?
Ok, can you explain a bit further?
In Google Search Console, it seems the content on the pages is not fetched by the crawler. But you're saying that static content is generated and sent to the crawler every time we update a page?
When I google Bubble (site:bubble.is), I see only what might be meta descriptions.
I have a question on this. I have a few pages for which I define the H1, title, etc. based on the parameters supplied to the page. Some of that logic is written in the "Conditional" tab of elements, and some in "On page load" workflows.
Will these be taken care of too when the pre-rendered data is sent to Google?
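One mechanism prerender.io documents for exactly this case is a `window.prerenderReady` flag: the page sets it to `false` up front, runs its dynamic-title logic, and flips it to `true` once the DOM is final, so the snapshot waits. Whether Bubble's pre-rendering honors this (or lets you run such a script at all) is a question for @emmanuel; a hypothetical sketch of the idea, with made-up parameter names:

```ts
declare global {
  interface Window {
    prerenderReady: boolean;
  }
}

// Tell the snapshot service not to capture the page yet.
window.prerenderReady = false;

window.addEventListener("load", () => {
  // Stand-in for an "On page load" workflow: derive the title/H1 from a
  // URL parameter ("product" is a hypothetical name).
  const productName =
    new URLSearchParams(window.location.search).get("product") ?? "Default";

  document.title = `${productName} | My App`;
  const h1 = document.querySelector("h1");
  if (h1) h1.textContent = productName;

  // Dynamic title/H1 are now in the DOM; the snapshot can be taken.
  window.prerenderReady = true;
});

export {}; // keep this file a module so `declare global` is valid
```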