Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website.
The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users.
The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as “follow” or “nofollow”).
In practice, robots.txt files indicate whether certain user agents (web-crawling software) can or cannot crawl parts of a website.
These crawl instructions are specified by “disallowing” or “allowing” the behavior of certain (or all) user agents.
User-agent: [user-agent name]
Disallow: [URL string not to be crawled]
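Together, these two lines form the basic skeleton of every rule. For example, a file that blocks all crawlers from a single directory might look like this (the directory name is just a placeholder):

```
User-agent: *
Disallow: /private/
```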
The importance of a robots.txt file
You might be surprised to hear that one small text file, known as robots.txt, could be the downfall of your website.
If you get the file wrong, you could end up telling search engine robots not to crawl your website, meaning your pages won’t appear in the search results.
Therefore, it’s important that you understand the purpose of a robots.txt file in SEO and learn how to check that you’re using it correctly.
A robots.txt file provides instructions to web robots about the pages the website owner doesn’t want to be ‘crawled’. For example, if you didn’t want your images to be listed by Google and other search engines, you’d block them using your robots.txt file.
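For the image example, one way to do this is to target Google’s image crawler by its user-agent name. A minimal sketch:

```
User-agent: Googlebot-Image
Disallow: /
```

Other search engines use their own image-crawler user-agent names, so you would add a similar rule for each one you want to block.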
How does it work?
Before a search engine crawls your website, it will look at your robots.txt file for instructions on where it is allowed to crawl (visit) and index (save) on the search engine results.
Robots.txt files are useful:
1. If you want search engines to ignore any duplicate pages on your website.
2. If you don’t want search engines to index your internal search results pages.
3. If you don’t want search engines to index certain areas of your website, or a whole website.
4. If you don’t want search engines to index certain files on your website (images, PDFs, etc.).
5. If you want to tell search engines where your sitemap is located.
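A single robots.txt file can combine several of these uses. A sketch covering the list above (all paths and the sitemap URL are hypothetical placeholders; note that wildcard patterns like `*.pdf$` are an extension supported by Google and Bing rather than part of the original standard):

```
User-agent: *
# Keep internal search results pages out of the index
Disallow: /search/
# Block a whole section of the site
Disallow: /admin/
# Block certain file types, e.g. PDFs
Disallow: /*.pdf$

# Tell search engines where the sitemap lives
Sitemap: https://www.example.com/sitemap.xml
```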
How to create a robots.txt file
If you’ve found that you don’t currently have a robots.txt file, I’d advise you to create one as soon as possible. You’ll want to:
1. Create a new text document and save it as the name “robots.txt” – you can use the Notepad program on Windows PCs or TextEdit for Macs, then use “Save As” to save it as a plain text file.
2. Upload it to the root directory of your website – this is typically a root-level folder called “htdocs” or “www”, which makes it appear directly after your domain name.
3. If you use subdomains, you’ll need to create a robots.txt file for each subdomain.
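The root-directory rule matters because crawlers only ever request robots.txt at the root of the host, never in a subfolder. A small sketch with Python’s standard library showing the URL a crawler derives from any page URL (the example.com addresses are placeholders):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Return the robots.txt URL a crawler derives from any page URL."""
    parts = urlsplit(page_url)
    # Only the scheme and host matter; path, query, and fragment are discarded.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://www.example.com/blog/post?id=1"))
# https://www.example.com/robots.txt
print(robots_url("https://shop.example.com/products"))
# https://shop.example.com/robots.txt  (each subdomain has its own file)
```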
Testing Your Robots.txt File
You can test your robots.txt file to make sure it works as you expect, and we’d recommend you do this even if you think it’s all correct.
The testing tool was created by Google to allow webmasters to check their robots.txt file. To test your robots.txt file, you’ll need to have the site it applies to registered with Google Webmaster Tools, which is genuinely useful in its own right, so you should have this set up already.
You then simply select the site from the list and Google will return notes for you, highlighting any errors.
- Test your robots.txt file using the Google Robots.txt Tester.
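Alongside Google’s tool, you can sanity-check your rules locally with Python’s built-in robots.txt parser. In this sketch the rules are fed in directly rather than fetched from a live site, and the paths are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: block internal search results, allow everything else.
rules = """
User-agent: *
Disallow: /search/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Any crawler is blocked from internal search results...
print(parser.can_fetch("Googlebot", "/search/?q=shoes"))  # False
# ...but free to crawl ordinary pages.
print(parser.can_fetch("Googlebot", "/blog/my-post"))     # True
```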
What happens if you have no robots.txt file?
Without a robots.txt file, search engines have free rein to crawl and index anything they find on the website.
This is fine for many websites, but it’s good practice to at least use the file to say where your XML sitemap is located, so search engines can find new content without having to slowly crawl through all the pages on your website and bumping into them days later.