jalex
April 13, 2019, 3:43pm
What is the correct way to disallow a page? Is it:
Disallow: /pagename
Disallow: /pagename/
Disallow: pagename
Disallow: /https://sitename/pagename
I tried all four, deployed the new version, and tested the live URL in Search Console, and it says the page can still be crawled and indexed. Can anyone please tell me what I am missing here?
yla
April 13, 2019, 3:55pm
Did you put a line for User-agent as well? The bare minimum for a robots.txt file, from my understanding, is:
User-agent: [user-agent name]
Disallow: [URL string not to be crawled]
For a specific page, an example is:
User-agent: *
Disallow: /example-subfolder/blocked-page.html
Reference: https://moz.com/learn/seo/robotstxt
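As a quick sanity check (a minimal sketch; sitename here is a placeholder for your actual domain), Python's standard-library robotparser can evaluate rules like these without deploying anything:

from urllib import robotparser

# Parse the rules inline instead of fetching them, so the check
# works before the file is deployed.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /example-subfolder/blocked-page.html",
])

# can_fetch(user_agent, url) returns True if crawling is allowed.
print(rp.can_fetch("*", "https://sitename/example-subfolder/blocked-page.html"))  # expect False
print(rp.can_fetch("*", "https://sitename/some-other-page"))                      # expect True

If can_fetch returns False for the blocked page, the rule syntax itself is fine and the problem lies elsewhere (deployment or caching).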
jalex
April 13, 2019, 4:02pm
Yes, I have
User-agent: *
Since Bubble pages don't end in .html, shouldn't I be able to put just
Disallow: /pagename ?
yla
April 13, 2019, 4:05pm
Yeah, that should work:
User-agent: *
Disallow: /pagename
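One nuance to be aware of: Disallow rules are prefix matches, so /pagename also blocks paths like /pagename-v2 and /pagename/anything. If you ever need to block only the exact path, Google (and Bing) support a $ end-of-URL anchor, though it is not part of the original robots.txt standard:

User-agent: *
Disallow: /pagename$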
Does Search Console take a bit of time to update and reflect changes?
jalex
April 13, 2019, 4:09pm
I am thinking that might be the case; I will give Google a bit of time.
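In the meantime, one way to confirm the deployed file is being served and parsed as expected (again a sketch; sitename and pagename are placeholders) is to point Python's robotparser at the live file:

from urllib import robotparser

# Fetch and parse the robots.txt that is actually deployed.
rp = robotparser.RobotFileParser()
rp.set_url("https://sitename/robots.txt")
rp.read()

# False means the live file blocks the page; True means the deployed
# robots.txt does not contain the rule you expect.
print(rp.can_fetch("*", "https://sitename/pagename"))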
system
Closed
June 22, 2019, 3:43pm
This topic was automatically closed after 70 days. New replies are no longer allowed.