How to Create a robots.txt File | How to Set Rules for the robots.txt File?
In the previous tutorial we discussed what the robots.txt file is, why we use it for our website, and why it is beneficial for "search engine optimization" too. In this post, I will show you how to create a robots.txt file for your website if you have not created one yet, and how to set rules for your website and pages through the robots.txt file.
There are two important directives in robots.txt: Allow and Disallow.
But first, a short summary of the robots.txt file in case you are not aware of it. The robots.txt file is the entry point of any website for Google's bots: it tells them which parts of the site to crawl and index and which parts not to index.
So be careful when working on the robots.txt file, because one small mistake can stop your website from being crawled, and it will not get indexed.
Recommendation: Always double-check before making any changes to the robots.txt file.
What Can Go Wrong if You Do Not Use a robots.txt File?
- Your website's sensitive data may get crawled, and anyone can see it.
- Irrelevant data will get indexed in Google; this can hurt your site because it offers no useful content to the audience.
- You may face indexing and crawling problems.
- And many more.
How to Create a robots.txt File?
If you don't have a robots.txt file for your website yet, you can easily create one.
Follow these steps:
- Go to your website's root directory or folder on the server.
- Your website's root directory may look like /public_html.
- Click on "create new file" and give it the name robots.txt.
- Note: the robots.txt file always has the .txt extension.
- After successfully creating the robots.txt file, you can check it by opening http://www.yoursitename.com/robots.txt in your browser.
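The steps above can be sketched in Python. This is a minimal illustration only: the temporary folder stands in for your server's web root (such as /public_html), and the rules written are the simple allow-everything pair covered later in this post.

```python
import os
import tempfile

# Stand-in for the website root directory (e.g. /public_html on a real server).
root = tempfile.mkdtemp()

# The file must be named exactly "robots.txt" and sit in the root directory.
path = os.path.join(root, "robots.txt")

with open(path, "w") as f:
    f.write("User-agent: *\n")
    f.write("Disallow:\n")  # an empty Disallow means: allow everything

print(open(path).read())
```

On a real server you would create the same file through your hosting file manager or FTP client rather than a script; the point is only that it is an ordinary plain-text file at the site root.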
Q1. How to Stop Indexing for the Whole Site?
Ans:
User-agent: *
Disallow: /
This tells all search engine spiders not to crawl or index any part of your site.
Q2. How to Allow Everything on the Website to Be Crawled & Indexed?
Ans:
User-agent: *
Disallow:
Note: the asterisk (*) stands for all search engine spiders; you may also name a specific spider to set allow and disallow rules for it alone.
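You can check how these two wildcard rule sets behave with Python's standard urllib.robotparser module; the URL below is just an illustration.

```python
from urllib.robotparser import RobotFileParser

# An allow-everything robots.txt (empty Disallow), as in Q2.
allow_all = RobotFileParser()
allow_all.parse(["User-agent: *", "Disallow:"])

# A block-everything robots.txt (Disallow: /), as in Q1.
block_all = RobotFileParser()
block_all.parse(["User-agent: *", "Disallow: /"])

url = "http://www.yoursitename.com/page.html"  # illustrative URL
print(allow_all.can_fetch("Googlebot", url))  # True: crawling allowed
print(block_all.can_fetch("Googlebot", url))  # False: crawling blocked
```

This is the same parser well-behaved crawlers use to decide whether a page may be fetched, so it is a handy way to test a rules file before uploading it.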
Q3. How to Block a Specific File or Folder?
Ans:
User-agent: *
Disallow: /folder-name/
OR, to block a single file:
User-agent: *
Disallow: /folder-name/file-name.html
(Here /folder-name/ and file-name.html are placeholders; replace them with the actual path you want to block.)
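A short check of the folder-blocking rule with urllib.robotparser; the /private/ folder name here is a placeholder, not a real path on your site.

```python
from urllib.robotparser import RobotFileParser

# Rules blocking one folder for every spider; "/private/" is a placeholder.
parser = RobotFileParser()
parser.parse(["User-agent: *", "Disallow: /private/"])

site = "http://www.yoursitename.com"
print(parser.can_fetch("Googlebot", site + "/private/secret.html"))  # False
print(parser.can_fetch("Googlebot", site + "/index.html"))           # True
```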
Q4. How to Allow Only a Specific Search Engine Spider or Bot?
Ans:
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
This means Googlebot will crawl everything on your site, while other bots, like those of Yahoo, Bing, etc., will not crawl your site.
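A quick way to confirm this behaviour is again urllib.robotparser: a rules file that allows Googlebot everything and blocks all other spiders. The Bingbot name below is only there for the comparison.

```python
from urllib.robotparser import RobotFileParser

# Allow only Googlebot; block every other spider.
parser = RobotFileParser()
parser.parse([
    "User-agent: Googlebot",
    "Disallow:",        # empty Disallow = Googlebot may crawl everything
    "",
    "User-agent: *",
    "Disallow: /",      # every other spider is blocked
])

url = "http://www.yoursitename.com/page.html"
print(parser.can_fetch("Googlebot", url))  # True
print(parser.can_fetch("Bingbot", url))    # False
```

Note that robots.txt is advisory: well-behaved crawlers honour these rules, but it is not an access-control mechanism.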
Stay in touch with us for more technology knowledge!