classesandcareers.net
robots.txt

Robots Exclusion Standard data for classesandcareers.net

Resource Scan

Scan Details

Site Domain classesandcareers.net
Base Domain classesandcareers.net
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a server error.
Last Scan 2024-04-14T08:32:21+00:00
Next Scan 2024-07-13T08:32:21+00:00
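
The failed scan stopped at the fetching stage because the host answered with a server error, i.e. a 5xx HTTP status rather than the file itself. The scanner's actual implementation is not documented here; a minimal Python sketch of how such a failure could be detected for the recorded URL:

```python
import urllib.error
import urllib.request

URL = "https://classesandcareers.net/robots.txt"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        body = resp.read()
        print("fetched: HTTP", resp.status, f"({len(body)} bytes)")
except urllib.error.HTTPError as err:
    # A 5xx answer corresponds to the "Server returned a server error"
    # failure reason recorded for the 2024-04-14 scan.
    if 500 <= err.code < 600:
        print("scan failed while fetching resource: HTTP", err.code)
    else:
        print("fetch refused: HTTP", err.code)
except urllib.error.URLError as err:
    print("scan failed before an HTTP response was received:", err.reason)
```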

Last Successful Scan

Scanned 2023-02-27T07:42:35+00:00
URL https://classesandcareers.net/robots.txt
Domain IPs 104.21.49.94, 172.67.143.157, 2606:4700:3031::ac43:8f9d, 2606:4700:3037::6815:315e
Response IP 104.21.49.94
Found Yes
Hash 367106e4141d7043c826dc1ea14e154cd25386dec2d8127a1be40bb70c240d43
SimHash 2294cb877664
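
When a fetch succeeds, the scan records a content hash and a SimHash fingerprint of the response. The Hash value is 64 hexadecimal characters, which is consistent with a SHA-256 digest of the fetched body; a minimal sketch under that assumption (the scanner's actual hashing scheme, and its SimHash parameters, are not stated):

```python
import hashlib

def content_hash(body: bytes) -> str:
    # Produces a 64-hex-character digest matching the format of the
    # Hash field above. SHA-256 is an assumption; the scanner does not
    # document which algorithm it uses.
    return hashlib.sha256(body).hexdigest()

# Example with a placeholder body; the real digest would be computed over
# the bytes returned for https://classesandcareers.net/robots.txt.
print(content_hash(b"User-agent: *\nDisallow: /\n"))
```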

Groups

adsbot-google

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

googlebot

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

googlebot-news

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

googlebot-image

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

googlebot-video

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

googlebot-mobile

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

midiapartners-google

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

midiapartners

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

slurp

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

yahoo

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

msnbot

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

bingbot

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

teoma

Rule Path
Disallow /schooldegrees/imperio.php

Other Records

Field Value
crawl-delay 10

*

Rule Path
Disallow /
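
Taken together, the recorded groups disallow a single path (/schooldegrees/imperio.php) for each named crawler, give each of them a crawl-delay of 10, and disallow the entire site for every other agent via the * group. A minimal sketch of how such records are interpreted, using Python's urllib.robotparser against a reconstruction of the rules (not the verbatim file; one named group stands in for the others, which are identical):

```python
from urllib import robotparser

# Reconstruction of the recorded records: Googlebot stands in for all of
# the named crawlers listed above, which carry identical rules.
RULES = """\
User-agent: Googlebot
Disallow: /schooldegrees/imperio.php
Crawl-delay: 10

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

base = "https://classesandcareers.net"
print(rp.can_fetch("Googlebot", base + "/"))                           # True
print(rp.can_fetch("Googlebot", base + "/schooldegrees/imperio.php"))  # False
print(rp.can_fetch("SomeOtherBot", base + "/anything"))                # False: caught by the * group
print(rp.crawl_delay("Googlebot"))                                     # 10
```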

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-Agent: *
  • Disallow: /