What is the correct way to disallow a page?
I tried all four, deployed the new version, and tested the live URL in Search Console, but it says the page can still be crawled and indexed. Can anyone tell me what I am missing here?
Did you put a line for User-agent as well? The bare minimum for a robots.txt file, from my understanding, is:
User-agent: [user-agent name]
Disallow: [URL string not to be crawled]
For a specific page, a minimal example would look something like this (with /pagename standing in for the actual page path):
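User-agent: *
Disallow: /pagename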
Yes, I have.
Since Bubble page URLs don't have a .html extension, shouldn't I be able to put just
Disallow: /pagename ?
Yeah, that should work. Disallow rules are matched as path prefixes, so /pagename would cover that page and anything nested under it.
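And if you ever need to block more than one page, you can stack multiple Disallow lines under the same User-agent group; the page names below are just placeholders:

User-agent: *
Disallow: /admin-page
Disallow: /internal-page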
Does Search Console take a bit of time to update and reflect changes?
I am thinking that might be the case; I will give Google a bit of time.