How do you disallow a specific page in robots.txt

What is the correct way to disallow a page:

Is it:
Disallow: /pagename
Disallow: /pagename/
Disallow: pagename
Disallow: /https://sitename/pagename

I tried all four, deployed the new version, and tested the live URL in Search Console, and it says the page can be crawled and indexed. Can anyone please tell me what I am missing here?

Did you include a User-agent line as well? The bare minimum for a robots.txt file, from my understanding, is:

User-agent: [user-agent name]
Disallow: [URL string not to be crawled]

For a specific page, an example is:

User-agent: *
Disallow: /example-subfolder/blocked-page.html

Reference: https://moz.com/learn/seo/robotstxt
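You can also verify the deployed file yourself with Python's built-in urllib.robotparser, which fetches the live robots.txt and tests a URL against it. A quick sketch (sitename and pagename are placeholders from this thread):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://sitename/robots.txt")  # placeholder: your deployed site
rp.read()                                  # fetches and parses the live file

# False means the rule is active and crawlers should skip the page
print(rp.can_fetch("*", "https://sitename/pagename"))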

Yes, I have:
User-agent: *

Since Bubble pages don't have a .html extension, shouldn't I be able to put just:

Disallow: /pagename

Yeah, that should work 🤔

User-agent: *
Disallow: /pagename
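Worth noting that Disallow rules are prefix matches, so /pagename also covers /pagename/ and anything under it. You can check this locally with Python's urllib.robotparser by parsing the rules inline (the paths here are just examples):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /pagename",
])

print(rp.can_fetch("*", "/pagename"))      # False: exact match is blocked
print(rp.can_fetch("*", "/pagename/sub"))  # False: prefix match catches subpaths
print(rp.can_fetch("*", "/otherpage"))     # True: unrelated pages stay crawlable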

Does Search Console take a bit of time to update and reflect changes?

I am thinking that might be the case; I will give Google a bit of time.

Maybe: https://support.google.com/webmasters/answer/6065812
