In the user-agent string, you'll see “Amazonbot” together with additional agent information. An example looks like this:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)
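Amazonbot respects standard robots.txt directives. For example, the following rules block Amazonbot from crawling anything under /do-not-crawl/, while all other crawlers are blocked from /not-allowed/: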
User-agent: Amazonbot # Amazon's user agent
Disallow: /do-not-crawl/ # disallow this directory
User-agent: * # any robot
Disallow: /not-allowed/ # disallow this directory
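To confirm that rules like these behave as intended before deploying them, you can evaluate them with Python's standard-library robots.txt parser. The following is a minimal sketch; the sample paths and the "SomeOtherBot" name are hypothetical:

from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Amazonbot
Disallow: /do-not-crawl/

User-agent: *
Disallow: /not-allowed/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Amazonbot matches its own user-agent group, so only /do-not-crawl/ applies to it.
print(rp.can_fetch("Amazonbot", "/do-not-crawl/page"))    # False
print(rp.can_fetch("Amazonbot", "/not-allowed/page"))     # True
# Crawlers without a dedicated group fall back to the wildcard (*) group.
print(rp.can_fetch("SomeOtherBot", "/not-allowed/page"))  # False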
Amazonbot reads robots.txt at the host level (e.g., 'test.amazon.com'); it does not look for the file at the domain level (e.g., 'amazon.com'). If Amazonbot cannot fetch robots.txt due to IP or user-agent blocking, parsing errors, network timeouts, or non-successful status codes (such as 3XX, 4XX, or 5XX), it will attempt to refetch the file or fall back to a cached copy from the last 30 days. If both approaches fail, Amazonbot behaves as if robots.txt does not exist and crawls the site. When robots.txt is accessible, Amazonbot responds to changes within 24 hours.
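A crawler client can mirror this retry-then-cache fallback. The sketch below only illustrates the logic described above; it is not Amazon's implementation, and the cache structure, function name, and timeout are assumptions:

import time
import urllib.request

CACHE_MAX_AGE = 30 * 24 * 3600  # 30 days, in seconds

def robots_txt_or_none(url, cache):
    """cache maps url -> (fetch_timestamp, body)."""
    for _ in range(2):  # initial fetch plus one refetch attempt
        try:
            # urlopen raises on network failures and non-successful statuses.
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
            cache[url] = (time.time(), body)
            return body
        except OSError:  # URLError/HTTPError and timeouts are OSError subclasses
            continue
    stamped = cache.get(url)
    if stamped and time.time() - stamped[0] <= CACHE_MAX_AGE:
        return stamped[1]  # fall back to a cached copy from the last 30 days
    return None  # both approaches failed: treat robots.txt as absent

Amazonbot also honors the rel="nofollow" hint on individual links. For example, Amazonbot will not follow a sign-in link marked up like this: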
<a href="signin.php" rel=nofollow>Sign in </a>
...
To learn more about partnering with Amazon on Alexa and other virtual assistant use cases, contact us at pubpartnerships@amazon.com. If you are a content owner, please include your domain(s), RSS feed links, and/or other references to your content in your message.
Verify that a crawler accessing your server is the official Amazonbot crawler by using DNS lookups. This helps you identify other bots or malicious agents that may be accessing your site while claiming to be Amazonbot.
You can use command-line tools to verify Amazonbot by following these steps:
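1. Use the host command to run a reverse DNS lookup on the accessing IP address, and confirm that the returned hostname ends in crawl.amazonbot.amazon.
2. Use the host command to run a forward DNS lookup on that hostname, and confirm that it resolves back to the original accessing IP address.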
For example:
$ host 12.34.56.789
789.56.34.12.in-addr.arpa domain name pointer 12-34-56-789.crawl.amazonbot.amazon.
$ host 12-34-56-789.crawl.amazonbot.amazon
12-34-56-789.crawl.amazonbot.amazon has address 12.34.56.789
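The same forward-confirmed reverse DNS check can be scripted. Below is a minimal Python sketch using the standard library; the function name is an assumption, not an official Amazon tool:

import socket

def is_amazonbot(ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
    except OSError:
        return False
    if not hostname.endswith(".crawl.amazonbot.amazon"):
        return False  # not an Amazonbot crawl host
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)  # forward DNS lookup
    except OSError:
        return False
    return ip in addresses  # the round trip must return the original IP

Checking both directions matters: anyone can publish a reverse DNS record claiming to be Amazonbot, but only Amazon controls the forward records under crawl.amazonbot.amazon.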