
Robots Exclusion Standard data for en.allconstructions.com

Resource Scan

Scan Details

Site Domain en.allconstructions.com
Base Domain allconstructions.com
Scan Status Ok
Last Scan 2024-10-29T04:56:40+00:00
Next Scan 2024-11-28T04:56:40+00:00

Last Scan

Scanned 2024-10-29T04:56:40+00:00
URL https://en.allconstructions.com/robots.txt
Domain IPs 37.156.219.92
Response IP 37.156.219.92
Found Yes
Hash 56fbbb9b22033f24c35044ebbb352c46bbec724dc809d332165d870288e555e5
SimHash 841d69154015

Groups

*

Rule Path
Disallow /account
Disallow /account/
Disallow /admin/
Disallow /admin
Disallow /user/
Disallow /user
Disallow /users/
Disallow /users
Disallow /session
Disallow /session/
Disallow /newsletter/
Disallow /portal/viesbuciu-rezervavimas
Disallow /*/article_print/
Disallow /*/product_print/
Disallow /*/company_print/
Disallow /blokai
Disallow /blokai/
Disallow /blocks
Disallow /blocks/
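
The Disallow values in this group are plain path prefixes under the Robots Exclusion Standard, so /account also blocks /account/settings and even /accounts; listing both the bare and trailing-slash variants, as this file does, is belt-and-braces rather than strictly necessary. A minimal sketch of checking these rules with Python's standard urllib.robotparser (the inlined excerpt and sample paths are illustrative, not the full file):

    from urllib.robotparser import RobotFileParser

    # Parse a small excerpt of the "*" group above, inlined so the
    # example is self-contained.
    rp = RobotFileParser()
    rp.parse([
        "User-agent: *",
        "Disallow: /account",
        "Disallow: /admin/",
    ])

    # Prefix matching: "/account" blocks deeper and longer paths too.
    print(rp.can_fetch("*", "https://en.allconstructions.com/account/settings"))  # False
    print(rp.can_fetch("*", "https://en.allconstructions.com/accounts"))          # False
    print(rp.can_fetch("*", "https://en.allconstructions.com/admin"))             # True: the excerpt lists only /admin/
    print(rp.can_fetch("*", "https://en.allconstructions.com/news"))              # True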

slurp

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 5
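
The /*.css$ rule relies on the widely adopted wildcard extensions to the original standard: * matches any run of characters and a trailing $ anchors the rule at the end of the URL. Python's urllib.robotparser implements only the original prefix matching and treats both characters literally, so a crawler that wants to honor this group needs its own matcher. A sketch under that assumption (rule_to_regex is a hypothetical helper, not a library function):

    import re

    def rule_to_regex(rule: str) -> re.Pattern:
        """Translate a robots.txt path rule with Google-style wildcards
        into an anchored regular expression: '*' matches any run of
        characters, a trailing '$' anchors the rule at the end of the URL."""
        anchored = rule.endswith("$")
        body = rule[:-1] if anchored else rule
        pattern = ".*".join(re.escape(part) for part in body.split("*"))
        return re.compile("^" + pattern + ("$" if anchored else ""))

    css_rule = rule_to_regex("/*.css$")
    print(bool(css_rule.match("/themes/main.css")))      # True: blocked for slurp
    print(bool(css_rule.match("/themes/main.css?v=2")))  # False: '$' anchors at the end
    print(bool(css_rule.match("/css-guide")))            # False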

twiceler.

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 5

twiceler

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 5

aport

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 20

stackrambler

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 20

msnbot

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 10

bingbot

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 10

kalooga

Rule Path
Disallow /
Disallow /*

*

Rule Path
Disallow /*.css$

baiduspider

Rule Path
Disallow /
Disallow /*

baiduspider-image

Rule Path
Disallow /
Disallow /*

baiduspider-ads

Rule Path
Disallow /
Disallow /*

ahrefsbot

Rule Path
Disallow /

yandex

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 2
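
Crawl-delay is a non-standard but widely honored directive; this group puts no path restrictions on yandex but asks for a 2-second pause between requests. A minimal polite-fetch loop using the standard library, assuming the live file is reachable (the page URLs are placeholders):

    import time
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://en.allconstructions.com/robots.txt")
    rp.read()  # fetch and parse the live file

    delay = rp.crawl_delay("yandex") or 0  # 2 seconds, per the record above
    for url in [
        "https://en.allconstructions.com/",         # placeholder URL
        "https://en.allconstructions.com/portal/",  # placeholder URL
    ]:
        if rp.can_fetch("yandex", url):
            pass  # fetch and process the page here
        time.sleep(delay)  # honor the advertised crawl-delay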

bytespider

Rule Path
Disallow /

Other Records

Field Value
crawl-delay 7

*

No rules defined. All paths allowed.

Other Records

Field Value
sitemap http://en.allconstructions.com/sitemap.xml.gz
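
The sitemap record points crawlers at a gzip-compressed XML sitemap. A short sketch that downloads it and prints the listed URLs using only the standard library, assuming the file is a plain urlset rather than a sitemap index:

    import gzip
    import urllib.request
    import xml.etree.ElementTree as ET

    SITEMAP = "http://en.allconstructions.com/sitemap.xml.gz"  # from the record above
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}  # standard sitemap namespace

    with urllib.request.urlopen(SITEMAP) as resp:
        xml_bytes = gzip.decompress(resp.read())

    root = ET.fromstring(xml_bytes)
    # <urlset><url><loc>...</loc></url>...</urlset> is the usual layout;
    # a <sitemapindex> root would need one more level of fetching.
    for loc in root.findall(".//sm:loc", NS):
        print(loc.text)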

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-Agent: *
  • Disallow: /