wageindicator.org
robots.txt

Robots Exclusion Standard data for wageindicator.org

Resource Scan

Scan Details

Site Domain wageindicator.org
Base Domain wageindicator.org
Scan Status Ok
Last Scan 2024-11-07T02:56:54+00:00
Next Scan 2024-11-14T02:56:54+00:00

Last Scan

Scanned 2024-11-07T02:56:54+00:00
URL https://wageindicator.org/robots.txt
Domain IPs 139.162.181.223, 2a01:7e01::f03c:91ff:feca:3bd2
Response IP 139.162.181.223
Found Yes
Hash 8dc686f6faec8eb85104e7e9de1914b7ec895448eaad2e46d9c6e13644b94690
SimHash ae91aa534d65

Groups

*

Rule Path
Disallow (empty — all paths allowed)

Other Records

Field Value
crawl-delay 4
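The `*` group above carries only a `crawl-delay` of 4 seconds. A polite crawler can read that value with Python's standard-library `urllib.robotparser`; a minimal sketch, reconstructing the group from the scan data (the bot name is illustrative):

```python
from urllib.robotparser import RobotFileParser

# Reconstruct the "*" group recorded in the scan above.
lines = [
    "User-agent: *",
    "Crawl-delay: 4",
    "Disallow:",
]

rp = RobotFileParser()
rp.parse(lines)

# An unlisted bot falls back to the "*" group.
print(rp.crawl_delay("SomeBot"))                                  # → 4
print(rp.can_fetch("SomeBot", "https://wageindicator.org/page"))  # → True
```

Since the group's `Disallow` is empty, `can_fetch` allows every path; the only constraint is the 4-second pause between requests.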

www.deadlinkchecker.com

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 1

googlebot

Rule Path
Disallow /*atct_album_view$
Disallow /*folder_factories$
Disallow /*folder_summary_view$
Disallow /*login_form$
Disallow /*mail_password_form$
Disallow /%40%40search
Disallow /*search_rss$
Disallow /*sendto_form$
Disallow /*summary_view$
Disallow /*thumbnail_view$
Disallow /*?job-id=*
Disallow /google-search-result?q=*
Disallow /*archive-no-index$
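The Googlebot group relies on the wildcard extension noted in the site's own comments: `*` matches any character sequence and a trailing `$` anchors the rule to the end of the URL. Python's `urllib.robotparser` does not understand these wildcards, so checking a path against this group requires expanding them by hand. A simplified sketch (disallow-only, ignoring Google's longest-match precedence; example paths are illustrative):

```python
import re

def rule_to_regex(rule: str) -> re.Pattern:
    # "*" → any sequence of characters; trailing "$" → end-of-URL anchor;
    # everything else is matched literally from the start of the path.
    anchored = rule.endswith("$")
    body = rule[:-1] if anchored else rule
    pattern = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    if anchored:
        pattern += "$"
    return re.compile(pattern)

def is_disallowed(path: str, rules: list[str]) -> bool:
    # A path is blocked if any non-empty Disallow rule matches its prefix.
    return any(r and rule_to_regex(r).match(path) for r in rules)

rules = [
    "/*atct_album_view$",
    "/*?job-id=*",
    "/google-search-result?q=*",
]

print(is_disallowed("/photos/atct_album_view", rules))     # → True
print(is_disallowed("/jobs?job-id=7412100000000", rules))  # → True
print(is_disallowed("/salary/minimum-wage", rules))        # → False
```

The `$` anchor matters for rules like `/*atct_album_view$`: without it, any URL merely containing that substring would be blocked, not just URLs ending in it.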

Other Records

Field Value
sitemap https://wageindicator.org/sitemap.xml.gz

Comments

  • Define access-restrictions for robots/spiders
  • http://www.robotstxt.org/wc/norobots.html
  • By default we allow robots to access all areas of our site already accessible to anonymous users
  • Add Googlebot-specific syntax extension to exclude forms that are repeated for each piece of content in the site
  • the wildcard is only supported by Googlebot
  • http://www.google.com/support/webmasters/bin/answer.py?answer=40367&ctx=sibling
  • we want pages like our landing pages to be indexed (?job-id=7412100000000)
  • Disallow: /*?
  • it probably does no harm to index this "view" at the end of the URL?
  • Disallow: /*view$
  • Do not index archive folders with this ID