
Accepted answer

When several user agents are listed in a robots.txt file, Google's crawlers follow the most specific group that applies to them.
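As an illustration (the /private/ path below is just a hypothetical example), Googlebot would obey only its own group and could still crawl /private/, while crawlers matching only the wildcard group could not:

User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: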

If you want all of Google's crawlers to be able to crawl your pages, you don't need a robots.txt file at all.

If you want to block or allow all of Google's crawlers from accessing some of your content, you can do so by specifying Googlebot as the user agent. The following group, with an empty Disallow, allows Googlebot to crawl everything:

User-agent: Googlebot
Disallow:
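Conversely, to block Google's crawlers from some or all of your content, add a path to the Disallow line. A minimal sketch, where "/" blocks the entire site (a directory path such as /private/ would be a hypothetical narrower example):

User-agent: Googlebot
Disallow: /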

In case Cloudflare is responsible for preventing Googlebot from crawling, you can change the following settings: go to Firewall Settings > Managed Rules and turn off Cloudflare Specials.

Better still, disable the relevant rules individually so you don't lose the benefits of the other Cloudflare Specials rules. For reference, see Cloudflare's documentation on its managed Specials rules.

