velvetjobs.com
robots.txt

Robots Exclusion Standard data for velvetjobs.com

Resource Scan

Scan Details

Site Domain velvetjobs.com
Base Domain velvetjobs.com
Scan Status Ok
Last Scan 2024-04-21T15:16:30+00:00
Next Scan 2024-05-21T15:16:30+00:00

Last Scan

Scanned 2024-04-21T15:16:30+00:00
URL https://www.velvetjobs.com/robots.txt
Domain IPs 23.22.5.68, 3.226.182.14, 52.21.227.162, 54.237.159.171
Response IP 3.226.182.14
Found Yes
Hash 56eb5a774c9eafca65eb12f4cad6acd4bf5d6ced1436feac94358e14fe3eb7a9
SimHash ba2c2d8d7dd0

Groups

*

Rule     Path
Disallow /users/auth/google_oauth2*
Disallow /users/auth/facebook*
Disallow /users/auth/linkedin*
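
These three Disallow rules sit in the * group, so they apply to every crawler, and each ends in a trailing * wildcard. As an illustration only (not part of the scan data), the Python sketch below checks a path against these rules, assuming the common Robots Exclusion Protocol convention that * matches any run of characters and that rules are anchored to the start of the path; is_allowed and DISALLOW_RULES are hypothetical names.

  import re

  # Rules copied from the * group above.
  DISALLOW_RULES = [
      "/users/auth/google_oauth2*",
      "/users/auth/facebook*",
      "/users/auth/linkedin*",
  ]

  def is_allowed(path: str) -> bool:
      """Return False if any Disallow rule matches the start of the path."""
      for rule in DISALLOW_RULES:
          # '*' in a rule matches any run of characters; everything else is
          # literal, and the rule is anchored to the beginning of the path.
          pattern = "^" + ".*".join(re.escape(part) for part in rule.split("*"))
          if re.match(pattern, path):
              return False
      return True

  print(is_allowed("/users/auth/facebook/callback"))  # False: matched by /users/auth/facebook*
  print(is_allowed("/jobs/search"))                   # True: no rule applies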

Other Records

Field   Value
sitemap http://velvetjobs.s3.amazonaws.com/sitemaps/sitemap.xml.gz
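
The sitemap record points to a gzip-compressed XML file hosted on S3. As a hedged illustration (assuming the URL is still reachable and serves standard sitemap XML), the sketch below downloads it and lists its entries using only the Python standard library.

  import gzip
  import urllib.request
  import xml.etree.ElementTree as ET

  SITEMAP_URL = "http://velvetjobs.s3.amazonaws.com/sitemaps/sitemap.xml.gz"
  NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

  # Fetch the gzipped sitemap and decompress it in memory.
  with urllib.request.urlopen(SITEMAP_URL) as response:
      xml_bytes = gzip.decompress(response.read())

  root = ET.fromstring(xml_bytes)
  # A sitemap index nests <sitemap><loc> entries; a plain sitemap lists <url><loc> entries.
  for loc in root.findall("sm:sitemap/sm:loc", NS) + root.findall("sm:url/sm:loc", NS):
      print(loc.text)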

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-Agent: *
  • Disallow: /
  • directory
  • people
  • jobs
  • Disallow oauth links

Warnings

  • `noindex` is not a known field.