alltravels.com
robots.txt

Robots Exclusion Standard data for alltravels.com

Resource Scan

Scan Details

Site Domain alltravels.com
Base Domain alltravels.com
Scan Status Failed
Failure Stage Fetching resource.
Failure Reason Server returned a server error.
Last Scan 2024-09-22T08:17:25+00:00
Next Scan 2024-11-21T08:17:25+00:00

Last Successful Scan

Scanned 2024-07-18T07:58:42+00:00
URL https://alltravels.com/robots.txt
Domain IPs 104.21.2.156, 172.67.129.91, 2606:4700:3032::6815:29c, 2606:4700:3032::ac43:815b
Response IP 104.21.2.156
Found Yes
Hash aa6b2a87864f1137b6a240c8ed7818861846f1bec3f4c9ee55af76690b2fa86e
SimHash 1a14d9054dd1

Groups

*

Rule Path
Allow /

mediapartners-google

Rule Path
Allow /

changedetection

Rule Path
Disallow /

yahoo pipes 2.0

Rule Path
Disallow /

*

Rule Path
Disallow /blackhole/
Disallow /*?*blackhole=

*

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10

bingbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10

msnbot

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 10

baiduspider

Rule Path
Disallow /

semrushbot

Rule Path
Disallow /

ahrefs

Rule Path
Disallow /

dataforseo

Rule Path
Disallow /
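The groups above can be replayed through Python's standard-library robots.txt parser to see how they evaluate. This is a minimal sketch, not the scanner's own tooling: it reconstructs a subset of the scanned groups, and the `Mozilla/5.0` agent string is a hypothetical browser-like crawler used for contrast. Note that `urllib.robotparser` treats `*` inside a path literally, so the `/*?*blackhole=` wildcard rule only blocks that exact prefix here.

```python
from urllib import robotparser

# Subset of the scanned groups, reconstructed as robots.txt text.
rules = """\
User-agent: *
Disallow: /blackhole/
Disallow: /*?*blackhole=

User-agent: changedetection
Disallow: /

User-agent: bingbot
Crawl-delay: 10

User-agent: baiduspider
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("changedetection", "/"))            # False: group blocks everything
print(rp.can_fetch("Mozilla/5.0", "/blackhole/trap"))  # False: trap path under "*"
print(rp.can_fetch("Mozilla/5.0", "/hotels"))          # True: no matching Disallow
print(rp.crawl_delay("bingbot"))                       # 10
```

A group with only a `Crawl-delay` record (like `bingbot` above) still allows all paths; the delay is reported separately via `crawl_delay()`.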

Comments

  • will remove documents from your domain from the Wayback Machine
  • user agent for Alexa/Amazon also
  • User-agent: ia_archiver
  • Disallow: /
  • Disallow: /folder/
  • https://blogs.bing.com/webmaster/2009/08/10/crawl-delay-and-the-bing-crawler-msnbot/
  • https://www.bing.com/webmasters/crawlcontrol?siteUrl=https://www.alltravels.com/
  • Crawl-delay: 10
  • Semrush
  • Crawl-delay: 10
  • Ahrefs
  • DataForSeoBot
  • Sitemap: http://www.alltravels.com/sitemap.xml
  • This directive is independent of the user-agent line, so it does not matter where you place it in your file.
  • If you have a Sitemap index file, you can include the location of just that file.
  • You do not need to list each individual Sitemap listed in the index file
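The comment above about the `Sitemap` directive being independent of any `User-agent` line can be checked with the same standard-library parser (the `site_maps()` accessor exists in Python 3.8+). A minimal sketch, using the sitemap URL from the comments:

```python
from urllib import robotparser

# Sitemap lines are file-wide and position-independent: the parser
# collects them separately from any user-agent group, even when they
# appear before the first group.
rp = robotparser.RobotFileParser()
rp.parse("""\
Sitemap: http://www.alltravels.com/sitemap.xml

User-agent: *
Disallow: /blackhole/
""".splitlines())

print(rp.site_maps())  # ['http://www.alltravels.com/sitemap.xml']
```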