How do you disallow a specific page in robots.txt?

What is the correct way to disallow a page? Is it:
Disallow: /pagename
Disallow: /pagename/
Disallow: pagename
Disallow: /https://sitename/pagename

I tried all four, deployed the new version, and tested the live URL in Search Console, but it says the page can still be crawled and indexed. Can anyone please tell me what I am missing here?
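One way to narrow this down offline is to run the four candidate lines through Python's standard-library robots.txt parser. This is only a sketch: `sitename` and `pagename` are the placeholders from the post, the `User-agent: *` line is added because a `Disallow` rule only applies inside a user-agent group, and Google's own matcher differs in some details (wildcards, longest-match), though it agrees with the stdlib parser on simple path prefixes like these.

```python
# Check which of the four Disallow forms actually blocks /pagename,
# using Python's stdlib robots.txt parser.
from urllib.robotparser import RobotFileParser

candidates = [
    "Disallow: /pagename",
    "Disallow: /pagename/",
    "Disallow: pagename",
    "Disallow: /https://sitename/pagename",
]

for rule in candidates:
    parser = RobotFileParser()
    # A Disallow line only takes effect inside a User-agent group.
    parser.parse(["User-agent: *", rule])
    blocked = not parser.can_fetch("*", "https://sitename/pagename")
    print(f"{rule!r:45} blocks /pagename: {blocked}")
```

Only `Disallow: /pagename` blocks the page itself here: `/pagename/` matches only URLs *under* that path, and the last two forms are not valid URL-path prefixes, so they match nothing.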

Did you put a line for User-agent as well? From my understanding, the bare minimum for a robots.txt file is:

User-agent: [user-agent name]
Disallow: [URL string not to be crawled]

For a specific page, an example is:

User-agent: *
Disallow: /example-subfolder/blocked-page.html


Yes, I have

User-agent: *

Since Bubble page URLs don't end in .html, shouldn't I be able to put just

Disallow: /pagename ?

Yeah, that should work :thinking:

User-agent: *
Disallow: /pagename

Does Search Console take a bit of time to update and reflect changes?

I am thinking that might be the case. I will give Google a bit of time.


This topic was automatically closed after 70 days. New replies are no longer allowed.