itson.mx
robots.txt

Robots Exclusion Standard data for itson.mx

Resource Scan

Scan Details

Site Domain itson.mx
Base Domain itson.mx
Scan Status Ok
Last Scan 2024-11-14T13:56:30+00:00
Next Scan 2024-12-14T13:56:30+00:00

Last Scan

Scanned 2024-11-14T13:56:30+00:00
URL https://itson.mx/robots.txt
Domain IPs 20.110.129.15
Response IP 20.110.129.15
Found Yes
Hash 38736b775110e8bfcbab5a5330847d11a0482697a033345e888eab164cf34270
SimHash 6a1e94704531
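The Hash above is 64 hexadecimal characters, which is consistent with a SHA-256 digest of the fetched file. A minimal Python sketch that reproduces such a digest (assuming the Hash field is SHA-256 over the raw response body; the report does not state the algorithm):

    import hashlib
    import urllib.request

    # Fetch the same resource the scanner retrieved.
    url = "https://itson.mx/robots.txt"
    with urllib.request.urlopen(url) as resp:
        body = resp.read()

    # Assumption: the report's Hash field is a SHA-256 digest of the body.
    print(hashlib.sha256(body).hexdigest())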

Groups

semrushbot

Rule Path
Disallow /

yandex

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

googlebot

Rule Path
Disallow /*?*
Disallow /*folder_factories$
Disallow /*send_as_pdf*
Disallow /*download_as_pdf*
Disallow /parametrages/
Disallow /newsletter/
Disallow /abonnez-vous/
Disallow /don-en-ligne/
Disallow /portal_checkouttool/
Disallow /Members/
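The googlebot group above relies on Google-style wildcard matching, where * matches any run of characters and a trailing $ anchors the end of the URL path. A minimal sketch of that matching logic (the regex translation and the sample paths are illustrative assumptions, not part of the scan data):

    import re

    def robots_pattern_to_regex(pattern: str) -> re.Pattern:
        """Translate a Google-style robots.txt path pattern to a regex."""
        anchored = pattern.endswith("$")
        core = pattern[:-1] if anchored else pattern
        body = "".join(".*" if ch == "*" else re.escape(ch) for ch in core)
        return re.compile(body + ("$" if anchored else ""))

    disallow = ["/*?*", "/*folder_factories$", "/*send_as_pdf*", "/Members/"]
    rules = [robots_pattern_to_regex(p) for p in disallow]

    for path in ["/noticias?set_language=es", "/docs/folder_factories", "/alumnos/"]:
        blocked = any(r.match(path) for r in rules)  # prefix match, as robots rules are
        print(path, "->", "disallowed" if blocked else "allowed")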

mediapartners-google

Rule Path
Disallow
Allow /
Disallow /*folder_factories$
Disallow /*send_as_pdf*
Disallow /*download_as_pdf*
Disallow /parametrages/
Disallow /newsletter/
Disallow /abonnez-vous/
Disallow /don-en-ligne/
Disallow /portal_checkouttool/
Disallow /Members/

yahoo! slurp

Rule Path
Disallow /*?*
Disallow /*folder_factories$
Disallow /*send_as_pdf*
Disallow /*download_as_pdf*
Disallow /parametrages/
Disallow /newsletter/
Disallow /abonnez-vous/
Disallow /don-en-ligne/
Disallow /portal_checkouttool/
Disallow /Members/

Other Records

Field Value
crawl-delay 10

bingbot

Rule Path
Disallow /*?*
Disallow /*folder_factories$
Disallow /*send_as_pdf*
Disallow /*download_as_pdf*
Disallow /parametrages/
Disallow /newsletter/
Disallow /abonnez-vous/
Disallow /don-en-ligne/
Disallow /portal_checkouttool/
Disallow /Members/

Other Records

Field Value
crawl-delay 10

baiduspider

Rule Path
Disallow /*?*
Disallow /*folder_factories$
Disallow /*send_as_pdf*
Disallow /*download_as_pdf*
Disallow /parametrages/
Disallow /newsletter/
Disallow /abonnez-vous/
Disallow /don-en-ligne/
Disallow /portal_checkouttool/
Disallow /Members/

Other Records

Field Value
crawl-delay 10

msnbot

Rule Path
Disallow /

mj12bot

Rule Path
Disallow /

ahrefsbot

Rule Path
Disallow /

sogou spider

Rule Path
Disallow /

seokicks-robot

Rule Path
Disallow /

seokicks

Rule Path
Disallow /

discobot

Rule Path
Disallow /

blekkobot

Rule Path
Disallow /

blexbot

Rule Path
Disallow /

sistrix crawler

Rule Path
Disallow /

uptimerobot/2.0

Rule Path
Disallow /

ezooms robot

Rule Path
Disallow /

perl lwp

Rule Path
Disallow /

netestate ne crawler

Rule Path
Disallow /

wiseguys robot

Rule Path
Disallow /

turnitin robot

Rule Path
Disallow /

exabot

Rule Path
Disallow /

yandex

Rule Path
Disallow /

babya discoverer

Rule Path
Disallow /

*

Rule Path
Disallow /parametrages/
Disallow /newsletter/
Disallow /abonnez-vous/
Disallow /don-en-ligne/
Disallow /portal_checkouttool/
Disallow /Members/

Other Records

Field Value
crawl-delay 10
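The catch-all group above uses only plain path prefixes plus a crawl-delay, so it can be checked with Python's standard urllib.robotparser (which handles prefix rules and crawl-delay, but not the wildcard patterns in the bot-specific groups). A minimal sketch, using a hypothetical crawler name that falls under *:

    from urllib import robotparser

    rp = robotparser.RobotFileParser("https://itson.mx/robots.txt")
    rp.read()  # fetch and parse the live file

    agent = "ExampleCrawler"  # not named in any group, so the * rules apply
    for url in ("https://itson.mx/", "https://itson.mx/newsletter/archivo"):
        print(url, "->", "allowed" if rp.can_fetch(agent, url) else "disallowed")

    # Crawl-delay for the * group (10 seconds per this scan); None if absent.
    print("crawl-delay:", rp.crawl_delay(agent))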

Comments

  • Define access-restrictions for robots/spiders
  • http://www.robotstxt.org/wc/norobots.html
  • see http://opensourcehacker.com/2009/08/07/seo-tips-query-strings-multiple-languages-forms-and-other-content-management-system-issues/
  • Googlebot allows wildcard patterns (* and $) in its syntax
  • Block all URLs containing query strings (? pattern) - contentish objects expose query strings only for actions or status reports, which might confuse search results.
  • This will also block ?set_language
  • Allow Adsense bot on entire site
  • Block MJ12bot as it is just noise
  • Block Ahrefs
  • Block Sogou
  • Block SEOkicks
  • SEOkicks
  • Discoveryengine.com
  • Blekkobot
  • Block BlexBot
  • Block SISTRIX
  • Block Uptime robot
  • Block Ezooms Robot
  • Block Perl LWP
  • Block netEstate NE Crawler
  • Block WiseGuys Robot
  • Block Turnitin Robot
  • Exabot
  • Yandex
  • Babya Discoverer
  • Directories
  • Request-rate: defines the pages-per-seconds crawl ratio; 1/20 means 1 page every 20 seconds.
  • Crawl-delay: defines how many seconds to wait after each successful crawl.
  • Visit-time: defines the hours between which pages may be crawled. Example: 0100-0330, meaning pages will be indexed between 01:00 and 03:30 GMT (see the sketch after this list).
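Request-rate and visit-time are non-standard fields (as the Warnings below note), so most parsers simply ignore them. A minimal sketch of turning the values described above into structured data (the helper functions are illustrative, not part of any standard library):

    from datetime import time

    def parse_request_rate(value: str) -> tuple[int, int]:
        """Parse 'pages/seconds', e.g. '1/20' -> (1, 20): 1 page per 20 seconds."""
        pages, seconds = value.split("/")
        return int(pages), int(seconds)

    def parse_visit_time(value: str) -> tuple[time, time]:
        """Parse 'HHMM-HHMM' in GMT, e.g. '0100-0330' -> (01:00, 03:30)."""
        start, end = value.split("-")
        return (time(int(start[:2]), int(start[2:])),
                time(int(end[:2]), int(end[2:])))

    print(parse_request_rate("1/20"))     # (1, 20)
    print(parse_visit_time("0100-0330"))  # (datetime.time(1, 0), datetime.time(3, 30))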

Warnings

  • 2 invalid lines.
  • `request-rate` is not a known field.
  • `visit-time` is not a known field.