# Example robots.txt from (mt) Media Temple
# Learn more at http://mediatemple.net
# (mt) Forums - http://forum.mediatemple.net/
# (mt) System Status - http://status.mediatemple.net
# (mt) Statement of Support - http://mediatemple.net/support/statement/
# How do I check that my robots.txt file is working as expected?
# http://www.google.com/support/webmasters/bin/answer.py?answer=35237
# For a list of robots, please visit: http://www.robotstxt.org/db.html

# Instructions
# Remove the "#" to uncomment any line that you wish to use, but be sure
# not to uncomment the descriptive comments.

# Grant Robots Access
#######################################################################################

# This example allows all robots to visit all files because the wildcard "*"
# matches all robots:
#User-agent: *
#Disallow:

# To allow a single robot (here Googlebot, Google's crawler token) and keep
# all others out:
#User-agent: Googlebot
#Disallow:

#User-agent: *
#Disallow: /

# Deny Robots Access
#######################################################################################

# This example keeps all robots out:
User-agent: *
Disallow: /

# This example tells all crawlers not to enter four directories of a website:
#User-agent: *
#Disallow: /cgi-bin/
#Disallow: /images/
#Disallow: /tmp/
#Disallow: /private/

# Example that tells a specific crawler not to enter one specific directory:
#User-agent: BadBot
#Disallow: /private/

# Example that tells all crawlers not to enter one specific file called
# foo.html. Note that Disallow takes a URL path relative to the site root,
# not a filesystem path such as /var/www/vhosts/example.com/httpdocs/:
#User-agent: *
#Disallow: /directory/foo.html
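
# Allow One File Inside a Blocked Directory
#######################################################################################

# The "Allow" directive is an extension honored by major crawlers such as
# Googlebot and Bingbot, but it is not part of the original robots.txt
# standard, so older robots may ignore it. This example blocks /private/
# except for one page (the filename below is only an illustration):
#User-agent: *
#Allow: /private/public-page.html
#Disallow: /private/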
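
# Point Robots at Your Sitemap
#######################################################################################

# The "Sitemap" directive is also an extension, supported by Google, Bing,
# and other major search engines. It applies to all user agents and takes a
# full URL; the address below is only a placeholder for your own sitemap:
#Sitemap: http://example.com/sitemap.xml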
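
# Slow Down Crawling
#######################################################################################

# "Crawl-delay" is a nonstandard directive honored by some crawlers (for
# example Bingbot and Yandex) and ignored by others, including Googlebot.
# Where supported, the value is the number of seconds to wait between
# requests:
#User-agent: *
#Crawl-delay: 10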