wccls.bibliocommons.com
robots.txt

Robots Exclusion Standard data for wccls.bibliocommons.com

Resource Scan

Scan Details

Site Domain wccls.bibliocommons.com
Base Domain bibliocommons.com
Scan Status Ok
Last Scan 2024-09-19T22:29:26+00:00
Next Scan 2024-10-19T22:29:26+00:00

Last Scan

Scanned 2024-09-19T22:29:26+00:00
URL https://wccls.bibliocommons.com/robots.txt
Domain IPs 15.197.148.165, 99.83.171.230
Response IP 15.197.148.165
Found Yes
Hash a2f332f20c642e24a43b14f2b3495075041375d27f4b89b7e79a2683df118cff
SimHash 2a5e4a578f65

Groups

*

Rule Path
Disallow /layouts/
Disallow /holds/select_hold/
Disallow /item/report_match/
Disallow /info/select_library/
Disallow /item/digital_availability/
Disallow /item/show_circulation_widget/
Disallow /item/get_external_content/
Disallow /item/full_record/
Disallow /item/read_alike_list/
Disallow /item/load_ugc_content/
Disallow /item/syndetics_reviews
Disallow /item/awards_and_series_list
Disallow /item/bibs_from_isbns
Disallow /item/show_circulation/
Disallow /collection/add/
Disallow /bib/match/
Disallow /list/new/my/
Disallow /dashboard/browse/shelf_browse
Disallow /session/back_to
Disallow /header/state
Disallow /v2/availability/
Allow /*
Allow /sitemap.xml

Other Records

Field Value
crawl-delay 120

yandex

Rule Path
Disallow /

yandexbot

Rule Path
Disallow /

Other Records

Field Value
sitemap https://wccls.bibliocommons.com/sitemap
sitemap https://wccls.bibliocommons.com/events/sitemap
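The groups above can be exercised programmatically with Python's standard-library `urllib.robotparser`. The sketch below parses a minimal robots.txt reconstructed from a subset of the scanned rules (the live file at https://wccls.bibliocommons.com/robots.txt is authoritative); the example paths passed to `can_fetch` are hypothetical. Note that `robotparser` matches rules by plain path prefix, so the `Allow /*` line is not treated as a wildcard; URLs outside any `Disallow` prefix are simply allowed by default.

```python
from urllib import robotparser

# Minimal robots.txt reconstructed from the scan report above
# (only a few of the Disallow lines are reproduced here).
ROBOTS_TXT = """\
User-agent: *
Disallow: /layouts/
Disallow: /item/full_record/
Disallow: /v2/availability/
Allow: /*
Allow: /sitemap.xml
Crawl-delay: 120

User-agent: yandex
Disallow: /

Sitemap: https://wccls.bibliocommons.com/sitemap
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A Disallow prefix blocks everything beneath it.
print(rp.can_fetch("*", "/item/full_record/123"))   # False
# No Disallow prefix matches, so the URL is allowed.
print(rp.can_fetch("*", "/v2/record/S1"))           # True
# The yandex group disallows the whole site.
print(rp.can_fetch("yandex", "/"))                  # False
print(rp.crawl_delay("*"))                          # 120
print(rp.site_maps())
```

Running this also shows `site_maps()` returning the `Sitemap:` entries as a list, matching the Other Records section above.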

Comments

  • See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file