darykpats.lt
robots.txt

Robots Exclusion Standard data for darykpats.lt

Resource Scan

Scan Details

Site Domain darykpats.lt
Base Domain darykpats.lt
Scan Status Ok
Last Scan 2024-11-11T15:26:56+00:00
Next Scan 2024-11-18T15:26:56+00:00

Last Scan

Scanned 2024-11-11T15:26:56+00:00
URL https://www.darykpats.lt/robots.txt
Domain IPs 37.156.219.92
Response IP 37.156.219.92
Found Yes
Hash 90a21cbf630736ce8e2a98cac06f77ef56dcbf9f1ed910c31127235d43aab3e5
SimHash 621448154064
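
The Hash and SimHash fields appear to be the scanner's change-detection fingerprints: the SHA-256 digest flags any byte-level change between weekly scans, while a SimHash tolerates small edits so near-duplicate files can be recognized. A hedged sketch follows; the scanner's actual tokenization and hash function are undocumented, so this toy implementation will not reproduce the values shown above.

import hashlib

def sha256_hex(body: bytes) -> str:
    # Exact-change fingerprint, analogous to the Hash field above.
    return hashlib.sha256(body).hexdigest()

def simhash64(body: bytes) -> int:
    # Toy 64-bit SimHash: each whitespace token hashes to 64 bits, each
    # bit votes +1/-1, and the sign of the vote total fixes that bit.
    votes = [0] * 64
    for token in body.split():
        h = int.from_bytes(hashlib.md5(token).digest()[:8], "big")
        for bit in range(64):
            votes[bit] += 1 if (h >> bit) & 1 else -1
    return sum(1 << bit for bit in range(64) if votes[bit] > 0)

body = b"User-agent: *\nDisallow: /wp-admin/\n"
print(sha256_hex(body))   # 64 hex chars; changes on any edit
print(simhash64(body))    # similar bodies give hashes a small Hamming distance apart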

Groups

*

Rule Path
Disallow /wp-admin/
Disallow /wp-admin
Disallow /wp-content/
Disallow /wp-content
Disallow /blokai
Disallow /blokai/
Disallow /blocks
Disallow /blocks/

slurp

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 5

twiceler.

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 5

twiceler

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 5

aport

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 20

stackrambler

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 20

msnbot

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 10

bingbot

Rule Path
Disallow /*.css$

Other Records

Field Value
crawl-delay 10

kalooga

Rule Path
Disallow /
Disallow /*

*

Rule Path
Disallow /*.css$

baiduspider

Rule Path
Disallow /
Disallow /*

baiduspider-image

Rule Path
Disallow /
Disallow /*

baiduspider-ads

Rule Path
Disallow /
Disallow /*

ahrefsbot

Rule Path
Disallow /

yandex

No rules defined. All paths allowed.

Other Records

Field Value
crawl-delay 2

bytespider

Rule Path
Disallow /

Other Records

Field Value
crawl-delay 7
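
Taken together, the groups above amount to a fairly standard WordPress-style robots.txt: admin and content paths are blocked for everyone, several legacy crawlers are throttled and kept away from stylesheets, and a few bots (kalooga, the baiduspider variants, ahrefsbot, bytespider) are banned outright. The sketch below re-assembles two of these groups into robots.txt syntax and queries them with Python's stdlib parser; "SomeBot" and the /contact/ path are hypothetical, and note that urllib.robotparser ignores the */$ wildcard extension that the /*.css$ rules rely on.

from urllib import robotparser

ROBOTS = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /blokai

User-agent: bingbot
Disallow: /*.css$
Crawl-delay: 10
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Plain prefix rules behave as expected.
print(rp.can_fetch("SomeBot", "https://www.darykpats.lt/wp-admin/"))  # False
print(rp.can_fetch("SomeBot", "https://www.darykpats.lt/contact/"))   # True

# Crawl-delay is exposed per agent (Python 3.6+).
print(rp.crawl_delay("bingbot"))  # 10

# Caveat: the stdlib parser treats * and $ literally, so the /*.css$
# rule never matches here even though Bingbot itself honors wildcards.
print(rp.can_fetch("bingbot", "https://www.darykpats.lt/style.css"))  # True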

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
  • To ban all spiders from the entire site uncomment the next two lines:
  • User-Agent: *
  • Disallow: /
  • User-agent: *
  • Sitemap: http://www.darykpats.lt/sitemap.xml.gz
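
The Sitemap record appears among the comments, which suggests it is commented out in the live file. If it were active, the stdlib parser would surface it directly (Python 3.8+); a small sketch, assuming an uncommented record:

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Sitemap: http://www.darykpats.lt/sitemap.xml.gz",
])
print(rp.site_maps())  # ['http://www.darykpats.lt/sitemap.xml.gz']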