
What is robots.txt?

About /robots.txt
In a nutshell

Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called The Robots Exclusion Protocol.

It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds:

User-agent: *
Disallow: /

The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.
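
You can watch a robot make this decision with Python's standard urllib.robotparser module, which implements the same matching rules (a quick sketch using the example URL from above; the "AnyBot" name is made up):

```python
from urllib import robotparser

# Parse the example robots.txt shown above.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# "Disallow: /" matches every path, so no robot may fetch anything.
print(rp.can_fetch("AnyBot", "http://www.example.com/welcome.html"))  # → False
```

A well-behaved crawler performs exactly this check before requesting any page from the site.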

There are two important considerations when using /robots.txt:

    * robots can ignore your /robots.txt. In particular, malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers, will pay no attention.
    * the /robots.txt file is a publicly available file. Anyone can see what sections of your server you don't want robots to use.

So don't try to use /robots.txt to hide information.

See also:

    * Can I block just bad robots?
    * Why did this robot ignore my /robots.txt?
    * What are the security implications of /robots.txt?

The details

The /robots.txt protocol is a de facto standard, and is not owned by any standards body. There are two historical descriptions:

    * the original 1994 A Standard for Robot Exclusion document.
    * a 1997 Internet Draft specification A Method for Web Robots Control

In addition there are external resources:

    * HTML 4.01 specification, Appendix B.4.1
    * Wikipedia - Robots Exclusion Standard

The /robots.txt standard is not actively developed. See What about further development of /robots.txt? for more discussion.

The rest of this page gives an overview of how to use /robots.txt on your server, with some simple recipes. To learn more see also the FAQ.

How to create a /robots.txt file
Where to put it

The short answer: in the top-level directory of your web server.

The longer answer:

When a robot looks for the "/robots.txt" file for a URL, it strips the path component from the URL (everything from the first single slash) and puts "/robots.txt" in its place.

For example, for "http://www.example.com/shop/index.html", it will remove "/shop/index.html", replace it with "/robots.txt", and end up with "http://www.example.com/robots.txt".

So, as a web site owner you need to put it in the right place on your web server for that resulting URL to work. Usually that is the same place where you put your web site's main "index.html" welcome page. Where exactly that is, and how to put the file there, depends on your web server software.
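
The stripping step described above can be sketched with Python's standard urllib.parse (the helper name robots_url is made up for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Strip the path from a URL and put /robots.txt in its place."""
    parts = urlsplit(page_url)
    # Keep only scheme and host; discard path, query, and fragment.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("http://www.example.com/shop/index.html"))
# → http://www.example.com/robots.txt
```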

Remember to use all lower case for the filename: "robots.txt", not "Robots.TXT".

See also:

    * What program should I use to create /robots.txt?
    * How do I use /robots.txt on a virtual host?
    * How do I use /robots.txt on a shared host?

What to put in it
The "/robots.txt" file is a text file, with one or more records. It usually contains a single record looking like this:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~joe/

In this example, three directories are excluded.

Note that you need a separate "Disallow" line for every URL prefix you want to exclude -- you cannot say "Disallow: /cgi-bin/ /tmp/" on a single line. Also, you may not have blank lines in a record, as they are used to delimit multiple records.

Note also that globbing and regular expressions are not supported in either the User-agent or Disallow lines. The '*' in the User-agent field is a special value meaning "any robot". Specifically, you cannot have lines like "User-agent: *bot*", "Disallow: /tmp/*" or "Disallow: *.gif".
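
To see how a robot interprets such a record, here is a sketch using Python's urllib.robotparser (each Disallow value is a plain URL prefix, not a pattern; "AnyBot" is a made-up name):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /cgi-bin/",
    "Disallow: /tmp/",
    "Disallow: /~joe/",
])

# Paths under a disallowed prefix are blocked; everything else is allowed.
print(rp.can_fetch("AnyBot", "http://www.example.com/cgi-bin/search"))  # → False
print(rp.can_fetch("AnyBot", "http://www.example.com/index.html"))      # → True
```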

What you want to exclude depends on your server. Everything not explicitly disallowed is considered fair game to retrieve. Here follow some examples:
To exclude all robots from the entire server

User-agent: *
Disallow: /


To allow all robots complete access

User-agent: *
Disallow:

(or just create an empty "/robots.txt" file, or don't use one at all)
To exclude all robots from part of the server

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/

To exclude a single robot

User-agent: BadBot
Disallow: /

To allow a single robot

User-agent: Google
Disallow:

User-agent: *
Disallow: /
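
A robot uses the first record whose User-agent matches it, and falls back to the "*" record otherwise. You can sanity-check this two-record file with urllib.robotparser (note that real Google crawlers actually identify as "Googlebot"; "Google" here simply follows the example above):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: Google",
    "Disallow:",
    "",
    "User-agent: *",
    "Disallow: /",
])

# The empty Disallow in the Google record allows it everything;
# every other robot falls through to the "*" record and is blocked.
print(rp.can_fetch("Google", "http://www.example.com/page.html"))  # → True
print(rp.can_fetch("BadBot", "http://www.example.com/page.html"))  # → False
```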

To exclude all files except one
This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:

User-agent: *
Disallow: /~joe/stuff/

Alternatively you can explicitly disallow all disallowed pages:

User-agent: *
Disallow: /~joe/junk.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html
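
Again this can be sanity-checked with urllib.robotparser: only the listed files are blocked, while siblings such as a hypothetical /~joe/index.html remain fetchable.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /~joe/junk.html",
    "Disallow: /~joe/foo.html",
    "Disallow: /~joe/bar.html",
])

# Only the explicitly listed files are excluded.
print(rp.can_fetch("AnyBot", "http://www.example.com/~joe/junk.html"))   # → False
print(rp.can_fetch("AnyBot", "http://www.example.com/~joe/index.html"))  # → True
```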
