zoomcreator.com
robots.txt

Robots Exclusion Standard data for zoomcreator.com

Resource Scan

Scan Details

Site Domain zoomcreator.com
Base Domain zoomcreator.com
Scan Status Ok
Last Scan 2024-10-09T21:12:32+00:00
Next Scan 2024-11-08T21:12:32+00:00

Last Scan

Scanned 2024-10-09T21:12:32+00:00
URL http://zoomcreator.com/robots.txt
Redirect http://www.zoomcreator.com/robots.txt
Redirect Domain www.zoomcreator.com
Redirect Base zoomcreator.com
Domain IPs 65.254.227.240
Redirect IPs 65.254.227.240
Response IP 65.254.227.240
Found Yes
Hash 695b24e8222fe0e5ec4911ab6bec3017931d5de531b85f0bcc1d88842ed3a2f6
SimHash a110dd5149ef

Groups

vscooter

Rule Path
Disallow /

dittospyder

Rule Path
Disallow /

googlebot-image

Rule Path
Disallow /

psbot

Rule Path
Disallow /

slurp

Rule Path
Disallow /favicon.ico

iconsurf

Rule Path
Disallow /favicon.ico

*

Rule Path
Disallow /webstatsbak/
Disallow /downloads_new/
Disallow /jobs/
Disallow /trap/
Disallow /test/
Disallow /survey/
Disallow /secure/
Disallow /poll/
Disallow /newsite/
Disallow /mobile/
Disallow /language/
Disallow /javascript/
Disallow /images/
Disallow /frame_cookie/
Disallow /forum/
Disallow /faq/
Disallow /errors/
Disallow /email/
Disallow /data/
Disallow /css/
Disallow /copyright/
Disallow /cgi-bin/
Disallow /bots/
Disallow /awstats/
Disallow /advertising/
Disallow /accessible/
Disallow /404/
Disallow /etc/
Disallow /htpasswd/
Disallow /logs/
Disallow /portal/

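The wildcard group above keeps every crawler out of a long list of directories, while several named bots are blocked from the whole site. A minimal sketch of how such rules are interpreted, using Python's standard urllib.robotparser (the paths and user agents below are taken from the scan above; how any particular crawler actually behaves is up to that crawler):

import urllib.robotparser

# A few representative lines from the scanned robots.txt (abridged).
ROBOTS_TXT = """\
User-agent: googlebot-image
Disallow: /

User-agent: slurp
Disallow: /favicon.ico

User-agent: *
Disallow: /images/
Disallow: /cgi-bin/
Disallow: /logs/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Named groups override the wildcard group for agents they match.
print(rp.can_fetch("Googlebot-Image", "http://www.zoomcreator.com/"))          # False
print(rp.can_fetch("Slurp", "http://www.zoomcreator.com/favicon.ico"))         # False
print(rp.can_fetch("OtherBot", "http://www.zoomcreator.com/images/logo.png"))  # False
print(rp.can_fetch("OtherBot", "http://www.zoomcreator.com/index.html"))       # True
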
Other Records

Field Value
sitemap http://www.zoomcreator.com/sitemap.xml

Comments

  • Welcome to ZOOMCREATOR.COM!
  • Robots.txt File created 2008/3/30
  • Beware! Bad Bots will be sent elsewhere!!
  • Any unauthorized bot running will result in IPs being banned. Agent spoofing is considered a bot - if it looks like a bot and is not from a search engine, it is a bot.
  • Honey pots are - and have been - running. If your access has been blocked for bot running, please private message me at http://developer.zoomcreator.com with a reinclusion request.
  • List of known bot user agents: robotstxt.org/db.html
  • Some Handy Hints for your robots.txt file
  • Keeping Your Images From Being Indexed
  • favicon disallow yahoo
  • favicon disallow iconsurf.com
  • Stop intrusive spy survey bot
  • User-agent: SurveyBot
  • Disallow: /
  • Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between successive requests to the same server:
  • User-agent: *
  • Crawl-delay: 10
  • Example
  • ask jeeves crawler
  • User-agent: Teoma
  • Crawl-Delay: 10
  • Preventing your pages from being cached by the Internet Archive's spider (alexa.com, archive.org)
  • Example
  • User-agent: ia_archiver
  • Disallow: /
  • Some major crawlers support an Allow directive, which can counteract a previous Disallow directive. This is useful when you disallow an entire directory but still want some documents in that directory crawled and indexed.
  • Example
  • User-agent: Googlebot
  • Disallow: /folder1/
  • Allow: /folder1/myfile.html
  • Sitemap: http://www.example.com/sitemap.xml.gz
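
The Crawl-delay and Allow examples quoted in the comments can be exercised the same way. A small sketch, again with Python's urllib.robotparser and the example.com names from the comments above (note: the standard-library parser applies rules in file order, first match wins, so the Allow line is placed before the Disallow here; Googlebot itself picks the most specific matching rule, so the ordering shown in the comments also works for Google):

import urllib.robotparser

EXAMPLE = """\
User-agent: Teoma
Crawl-delay: 10

User-agent: Googlebot
Allow: /folder1/myfile.html
Disallow: /folder1/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(EXAMPLE.splitlines())

# Crawl-delay is reported per matching group (None when none is set).
print(rp.crawl_delay("Teoma"))      # 10
print(rp.crawl_delay("Googlebot"))  # None

# The Allow line carves one file out of the disallowed directory.
print(rp.can_fetch("Googlebot", "http://www.example.com/folder1/myfile.html"))  # True
print(rp.can_fetch("Googlebot", "http://www.example.com/folder1/other.html"))   # False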