Robots Text Files (robots.txt)

Web robots, also called bots, crawlers, or spiders, are applications written to traverse the Web automatically. In essence, Web robots are a means of gathering information about Web sites. The most familiar kind are the crawlers run by search engines.

There are two methods of controlling these robots. To a degree, you can control them by placing a robots.txt file in your server's root directory. The other is to include the robots meta tag within the head section of your web pages. Below is an example of each approach:
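A minimal robots.txt sketch follows; the /private/ and /tmp/ paths are hypothetical, chosen only for illustration. Each record names a user agent and lists the paths a compliant crawler should skip:

    # Rules for all crawlers; the paths below are illustrative examples
    User-agent: *
    Disallow: /private/
    Disallow: /tmp/

A Disallow line with no path permits everything, and naming a specific crawler in the User-agent line restricts the record to that crawler alone. Note that robots.txt is advisory: well-behaved robots honor it, but nothing enforces it.

The meta tag works per page rather than site-wide. Its standard values are index or noindex (whether the page may appear in search results) and follow or nofollow (whether the page's links may be followed). For example, to keep a page out of indexes while still letting its links be followed:

    <meta name="robots" content="noindex, follow">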

See also: robotstxt.org