About Amazonbot

Amazonbot is Amazon's web crawler used to improve our services, such as enabling Alexa to answer even more questions for customers. Amazonbot respects standard robots.txt rules.

How Can I Identify Amazonbot?

In the user-agent string, you'll see “Amazonbot” together with additional agent information. An example looks like this:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)
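
As an illustration only, here is a minimal sketch (in Python, not something Amazonbot provides) of checking whether a request's User-Agent header contains the Amazonbot token. The sample string is simply the example above; how you obtain the header from your server or logs is up to you.

# Minimal sketch: flag requests whose User-Agent string contains "Amazonbot".
def is_amazonbot(user_agent):
    return "Amazonbot" in user_agent

sample_user_agent = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 "
    "(KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 "
    "(Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)"
)

print(is_amazonbot(sample_user_agent))  # True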


How Can I Control What Amazonbot Crawls on My Site?

Robots.txt: Amazonbot respects the robots.txt directives User-agent and Disallow. In the example below, Amazonbot won't crawl documents that are under /do-not-crawl/ or /not-allowed/:

User-agent: Amazonbot               # Amazon's user agent
Disallow: /do-not-crawl/            # disallow this directory

User-agent: *                # any robot
Disallow: /not-allowed/      # disallow this directory
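
As an illustration (this is not Amazonbot's own parser), you can sanity-check rules like the Amazonbot group above with Python's standard-library robots.txt parser. The example.com URLs below are placeholders, and only the Amazonbot group from the example is included:

# Minimal sketch: test the Amazonbot rules above with Python's built-in parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Amazonbot
Disallow: /do-not-crawl/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Amazonbot", "https://example.com/do-not-crawl/page.html"))  # False
print(parser.can_fetch("Amazonbot", "https://example.com/some-other-page.html"))    # True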


Amazonbot does not support the crawl-delay directive in robots.txt, nor robots meta tags on HTML pages such as "nofollow" and "noindex".

Link-Level Rel Parameter: Amazonbot supports the link-level rel="nofollow" directive. Include it in your HTML like this to keep Amazonbot from following and crawling a particular link on your website:

<a href="signin.php" rel="nofollow">Sign in</a>
...
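
If you want to audit which links on a page carry the attribute, a minimal sketch (using Python's standard-library HTML parser, purely illustrative) looks like this; the HTML fed in is just the example anchor above:

# Minimal sketch: list each link on a page and whether it carries rel="nofollow".
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href", "")
        rels = (attrs.get("rel") or "").lower().split()
        print(href, "nofollow" if "nofollow" in rels else "no nofollow")

LinkAudit().feed('<a href="signin.php" rel="nofollow">Sign in</a>')
# prints: signin.php nofollow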


Contact Us

If you have questions or concerns, please contact us. If you are a content owner, please always include any relevant domain names in your message.