www.cyber.airbus.com
robots.txt

Robots Exclusion Standard data for www.cyber.airbus.com

Resource Scan

Scan Details

Site Domain www.cyber.airbus.com
Base Domain airbus.com
Scan Status Ok
Last Scan 2024-11-09T00:43:43+00:00
Next Scan 2024-11-23T00:43:43+00:00

Last Scan

Scanned 2024-11-09T00:43:43+00:00
URL https://www.cyber.airbus.com/robots.txt
Redirect https://cyber.airbus.com:443/robots.txt
Redirect Domain cyber.airbus.com
Redirect Base airbus.com
Domain IPs 13.248.128.78, 76.223.0.206
Redirect IPs 45.60.244.88
Response IP 45.60.244.88
Found Yes
Hash 9b5b6c0f7bd96fe70bd8629923dfbab84d8bfa5ce082256f5de58a342c1febed
SimHash 75969b49cf40

Groups

alphaseobot
alphaseobot-sa
blexbot
alexibot
alvinetspider
antenne hatena
apocalxexplorerbot
asterias
backdoorbot/1.0
bizinformation
black hole
blowfish/1.0
botalot
builtbottough
bullseye/1.0
bunnyslippers
cegbfeieh
cheesebot
cherrypicker
cherrypickerelite/1.0
cherrypickerse/1.0
copyrightcheck
cosmos
crescent
crescent internet toolpak http ole control v.1.0
disco pump 3.1
dittospyder
dotbot
emailcollector
emailsiphon
emailwolf
erocrawler
exabot
extractorpro
flamingo_searchengine
foobot
grapeshot
harvest/1.5
hloader
httplib
httrack
httrack 3.0
humanlinks
igentia
infonavirobot
jennybot
jikespider
kenjin spider
lexibot
libweb/clshttp
linkextractorpro
linkscan/8.1a unix
linkwalker
lwp-trivial
lwp-trivial/1.34
mata hari
microsoft url control - 5.01.4511
microsoft url control - 6.00.8169
miixpc
miixpc/4.2
mister pix
mlbot
moget
moget/2.1
ms search 4.0 robot
ms search 5.0 robot
naverbot
netants
netattache
netmechanic
nicerspro
offline explorer
openfind
openindexspider
propowerbot/2.14
prowebwalker
psbot
quepasacreep
queryn metasearch
repomonkey
rma
sightupbot
sitebot
sitesnagger
sitesucker
spankbot
spanner
speedy
suggybot
superbot
superbot/2.6
suzuran
szukacz/1.4
teleport
telesoft
the intraformant
thenomad
tighttwatbot
titan
tocrawl/urldispatcher
toscrawler
trendictionbot
true_robot
true_robot/1.0
turingos
turnitinbot
urlpouls
urly warning
vci
web image collector
webauto
webbandit
webbandit/3.50
webcopier
webcopy
webenhancer
webmasterworldforumbot
webmirror
webreaper
websauger
website extractor
website quester
webster pro
webstripper
webstripper/2.02
webzip
wget
wikiofeedbot
winhttrack
www-collector-e
xenu link sleuth/1.3.8
yisouspider
yacy
yrspider
zeus
zookabot

Rule Path
Disallow /

*

Rule Path
Allow /core/*.css$
Allow /core/*.css?
Allow /core/*.js$
Allow /core/*.js?
Allow /core/*.gif
Allow /core/*.jpg
Allow /core/*.jpeg
Allow /core/*.png
Allow /core/*.svg
Allow /profiles/*.css$
Allow /profiles/*.css?
Allow /profiles/*.js$
Allow /profiles/*.js?
Allow /profiles/*.gif
Allow /profiles/*.jpg
Allow /profiles/*.jpeg
Allow /profiles/*.png
Allow /profiles/*.svg
Disallow /core/
Disallow /profiles/
Disallow /README.txt
Disallow /web.config
Disallow /admin/
Disallow /comment/reply/
Disallow /filter/tips
Disallow /node/add/
Disallow /search/
Disallow /user/register/
Disallow /user/password/
Disallow /user/login/
Disallow /user/logout/
Disallow /index.php/admin/
Disallow /index.php/comment/reply/
Disallow /index.php/filter/tips
Disallow /index.php/node/add/
Disallow /index.php/search/
Disallow /index.php/user/password/
Disallow /index.php/user/register/
Disallow /index.php/user/login/
Disallow /index.php/user/logout/
Disallow /*/node/
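The two groups above work together: the long list of named crawlers is denied everything ("Disallow /"), while all other user agents ("*") fall through to the Drupal-style path rules. A minimal sketch of how a well-behaved crawler would evaluate these rules, using Python's standard-library parser — note that urllib.robotparser implements only classic prefix matching, not the "*" and "$" wildcard extensions used in the Allow lines above, so the sample below keeps to plain-prefix rules; the user-agent "SomeCrawler" is a hypothetical name for illustration:

```python
from urllib.robotparser import RobotFileParser

# A reduced copy of the rules above: one banned agent from the block
# list, plus two of the plain-prefix Disallow rules from the "*" group.
# (Wildcard Allow rules like /core/*.css$ are omitted because the
# stdlib parser does not support * or $ extensions.)
RULES = """\
User-agent: blexbot
Disallow: /

User-agent: *
Disallow: /admin/
Disallow: /core/
"""

rp = RobotFileParser()
rp.parse(RULES.splitlines())

# A listed scraper is shut out of the whole site.
print(rp.can_fetch("blexbot", "https://cyber.airbus.com/"))            # False

# Any other agent is matched by the "*" group instead.
print(rp.can_fetch("SomeCrawler", "https://cyber.airbus.com/admin/"))  # False
print(rp.can_fetch("SomeCrawler", "https://cyber.airbus.com/news"))    # True
```

Crawlers that do honour the wildcard extensions (e.g. Googlebot) would additionally allow asset files such as /core/themes/foo.css despite the /core/ Disallow, because the more specific Allow pattern wins.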

Other Records

Field Value
sitemap https://cyber.airbus.com/sitemap.xml

Comments

  • robots.txt
  • This file is to prevent the crawling and indexing of certain parts
  • of your site by web crawlers and spiders run by sites like Yahoo!
  • and Google. By telling these "robots" where not to go on your site,
  • you save bandwidth and server resources.
  • This file will be ignored unless it is at the root of your host:
  • Used: http://example.com/robots.txt
  • Ignored: http://example.com/site/robots.txt
  • For more information about the robots.txt standard, see:
  • http://www.robotstxt.org/robotstxt.html
  • CSS, JS, Images
  • Directories
  • Files
  • Paths (clean URLs)
  • Paths (no clean URLs)

Warnings

  • 1 invalid line.